SQL Server CRUD Operations: Practical Patterns for 2026

When a production feature breaks at 2 a.m., the fix is rarely a fancy algorithm. Most of the time, it’s a straightforward data change: create a missing record, read what actually exists, update a wrong value, or delete a duplicate. That’s CRUD. In my experience, the teams that treat CRUD as a first‑class skill ship faster and recover from incidents with less drama.

In this post, I’ll show you how I structure SQL Server CRUD operations in real projects. You’ll see a clean table setup, idiomatic INSERT/SELECT/UPDATE/DELETE examples, common pitfalls, and performance notes I actually use. I’ll also point out where modern workflows in 2026 help—AI assistants that draft scripts, database project tooling that catches mistakes, and safe deployment patterns that prevent fat‑finger errors. You’ll walk away with runnable snippets and the reasoning behind each choice so you can apply it immediately.

How I think about CRUD in SQL Server

CRUD is just data manipulation: CREATE, READ, UPDATE, DELETE. In SQL Server terms, that’s DML. I treat it as the “language of record truth.” If you can’t trust your CRUD statements, you can’t trust your application.

I like a simple model when teaching teams: a table is a ledger; INSERT creates a line, SELECT reads it, UPDATE corrects it, DELETE removes it. You want every action to be explicit, predictable, and safe.

A few principles I stick to:

  • Always specify column lists in INSERT. It makes schema changes safer.
  • Use parameters and transactions for anything that modifies data.
  • Prefer targeted WHERE clauses over “best guess” filters.
  • Validate with a SELECT before and after UPDATE or DELETE.

That mindset keeps you out of trouble when you’re moving fast.

Setup: database and Employee table

I’ll use a simple Employee table to keep the focus on CRUD itself. You can copy these queries into SQL Server Management Studio (SSMS), Azure Data Studio, or any SQL client.

CREATE DATABASE geeks;
GO

USE geeks;
GO

CREATE TABLE dbo.Employee (
    EmpId INT IDENTITY(1,1) NOT NULL,
    EmpNo VARCHAR(10) NOT NULL,
    SSN VARCHAR(10) NULL,
    DOB DATE NULL,
    CreatedDt DATETIME2(0) NOT NULL,
    CreatedBy VARCHAR(10) NOT NULL,
    CONSTRAINT PK_Employee PRIMARY KEY (EmpId),
    CONSTRAINT UQEmployeeEmpNo UNIQUE (EmpNo)
);

A few notes on the table definition:

  • I use DATETIME2(0) instead of DATETIME. DATETIME2 lets me set the precision explicitly (whole seconds here) and avoids DATETIME’s ~3 ms rounding behavior.
  • EmpNo is unique; it’s a real‑world identifier you’ll likely query by.
  • I explicitly name constraints. You’ll thank yourself during troubleshooting.

If you’re working in a mature system, you’ll likely have more columns (department, status, updated timestamps, etc.). The CRUD patterns below scale cleanly.

CREATE (INSERT): inserting rows safely

INSERT is deceptively simple. The most common failures I see are missing column lists, mismatched order, and silent truncation. I avoid all three by always listing columns and validating data types.

Basic INSERT

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
VALUES ('1', '1234567890', '2000-01-01', SYSUTCDATETIME(), 'system');

If you look closely, I used SYSUTCDATETIME() rather than GETDATE(). I prefer UTC in multi‑region systems; it removes ambiguity during daylight saving time transitions and cross‑region queries.

Insert with different column order

You can reorder columns as long as values match the column list.

INSERT INTO dbo.Employee (SSN, DOB, CreatedDt, CreatedBy, EmpNo)
VALUES ('0123456789', '1999-01-01', SYSUTCDATETIME(), 'system', '2');

This is one reason I never omit column lists. If you rely on the default column order and the schema changes, you’ll insert the wrong data without an error.

Insert explicit identity value

Sometimes you need to restore data or align IDs during migration. SQL Server requires IDENTITY_INSERT for that.

SET IDENTITY_INSERT dbo.Employee ON;

INSERT INTO dbo.Employee (EmpId, EmpNo, SSN, DOB, CreatedDt, CreatedBy)
VALUES (3, '3', '0123456789', '1999-01-01', SYSUTCDATETIME(), 'system');

SET IDENTITY_INSERT dbo.Employee OFF;

I only do this in controlled scripts. In application code, let SQL Server generate the identity value and return it.

Insert multiple rows

Batch inserts reduce network round‑trips and usually perform better.

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
VALUES
    ('4', '1231544984', '2000-02-01', SYSUTCDATETIME(), 'system'),
    ('5', '5487946598', '2001-01-01', SYSUTCDATETIME(), 'system'),
    ('6', '8789453115', '2002-01-01', SYSUTCDATETIME(), 'system');

For large loads, I recommend BULK INSERT, OPENROWSET, or bcp. In 2026, most teams use an ingestion pipeline rather than raw multi‑row INSERTs, but the syntax above is still useful for seed data.
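As a rough sketch of what a bulk load looks like (the file path, options, and staging table name here are placeholders, not from this post):

-- Hypothetical bulk load into a staging table from a CSV file
BULK INSERT dbo.Employee_staging
FROM 'C:\loads\employees.csv'
WITH (
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    TABLOCK                  -- coarser lock, faster minimally logged load
);

The options shown are the common ones; check the file encoding and terminators against your actual export before running anything like this.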

Insert from another table

This is how I copy or backfill data.

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
SELECT EmpNo, SSN, DOB, SYSUTCDATETIME(), 'system'
FROM dbo.Employee_history
WHERE EmpNo NOT IN (SELECT EmpNo FROM dbo.Employee);

If you’re moving big datasets, prefer INSERT INTO ... SELECT with a proper index on the join keys.

Return generated values

In real apps I almost always need the generated identity value. I use SCOPE_IDENTITY() or the OUTPUT clause, depending on context.

DECLARE @NewEmpId INT;

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
VALUES ('7', '999887777', '1998-04-02', SYSUTCDATETIME(), 'system');

SET @NewEmpId = SCOPE_IDENTITY();

SELECT @NewEmpId AS EmpId;

For multi‑row inserts, OUTPUT is safer and more flexible:

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
OUTPUT inserted.EmpId, inserted.EmpNo
VALUES
    ('8', '111222333', '1994-07-01', SYSUTCDATETIME(), 'system'),
    ('9', '222333444', '1992-01-15', SYSUTCDATETIME(), 'system');

I use OUTPUT when I need to link child records or return data to an API without extra SELECTs.

READ (SELECT): querying with intent

SELECT is where you either clarify the truth or confuse everyone. I focus on intent: what do you want, and why?

Basic SELECT

SELECT EmpId, EmpNo, SSN, DOB, CreatedDt, CreatedBy
FROM dbo.Employee;

This is fine for tiny tables or development, but it scales poorly. In production, you should filter, paginate, or both.

Filtered SELECT with explicit columns

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpNo = '5';

I use explicit column lists so my app isn’t bound to schema changes. This keeps your API contracts stable even if your table grows.

Paginated SELECT (OFFSET/FETCH)

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
ORDER BY EmpId
OFFSET 0 ROWS FETCH NEXT 20 ROWS ONLY;

This pattern is common for APIs. The ORDER BY is required with OFFSET/FETCH, and it should be deterministic; otherwise the contents of each page are undefined.

Keyset pagination (faster for large tables)

OFFSET/FETCH gets slower the deeper you go. For large lists, I use keyset pagination.

DECLARE @LastEmpId INT = 1000;

SELECT TOP (20) EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpId > @LastEmpId
ORDER BY EmpId;

This avoids scanning past thousands of rows and keeps the query stable under load.

SELECT with computed columns

SELECT
    EmpId,
    EmpNo,
    DOB,
    DATEDIFF(YEAR, DOB, CAST(SYSUTCDATETIME() AS DATE)) AS ApproxAge
FROM dbo.Employee
WHERE DOB IS NOT NULL;

When you compute values, be aware of performance. Expressions can prevent index use; if a computed value is used often, consider a persisted computed column.

Safe “exists” check

IF EXISTS (SELECT 1 FROM dbo.Employee WHERE EmpNo = '5')
BEGIN
    SELECT EmpId, EmpNo, DOB FROM dbo.Employee WHERE EmpNo = '5';
END

In app code, I usually avoid IF EXISTS and just run the SELECT. But for scripts, this can avoid misleading “0 row” confusion when you’re troubleshooting.

SELECT for troubleshooting

When I’m on incident response, I want fast answers. Two patterns I use a lot:

-- Quick row counts by field
SELECT CreatedBy, COUNT(*) AS Cnt
FROM dbo.Employee
GROUP BY CreatedBy
ORDER BY Cnt DESC;

-- Look for duplicates (should not exist due to unique constraint)
SELECT EmpNo, COUNT(*) AS Cnt
FROM dbo.Employee
GROUP BY EmpNo
HAVING COUNT(*) > 1;

These are small, focused queries that surface data issues immediately.

UPDATE: precise, tested, and reversible

UPDATE is the most dangerous CRUD operation. A missing WHERE clause can wreck a table. I use a standard “preview then execute” workflow.

Preview before UPDATE

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpNo = '4';

Update with targeted WHERE

UPDATE dbo.Employee
SET DOB = '2000-03-01'
WHERE EmpNo = '4';

Validate after UPDATE

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpNo = '4';

I keep these in one transaction for safety when working in production or during incident response.

BEGIN TRANSACTION;

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpNo = '4';

UPDATE dbo.Employee
SET DOB = '2000-03-01'
WHERE EmpNo = '4';

SELECT EmpId, EmpNo, DOB
FROM dbo.Employee
WHERE EmpNo = '4';

-- ROLLBACK TRANSACTION; -- use during rehearsal
COMMIT TRANSACTION;

That rollback line is a lifesaver when you’re testing a fix in a live window.

Update multiple columns

UPDATE dbo.Employee
SET SSN = '111223333',
    CreatedBy = 'admin'
WHERE EmpNo = '5';

I keep the WHERE clause as tight as possible and use a unique key when I can. If you have to update by a non‑unique column, you should intentionally check the row count.

Row count validation

DECLARE @Rows INT;

UPDATE dbo.Employee
SET CreatedBy = 'migration'
WHERE CreatedBy = 'system';

SET @Rows = @@ROWCOUNT;

IF @Rows > 100
BEGIN
    -- Safety stop in scripts
    RAISERROR('Too many rows updated: %d', 16, 1, @Rows);
END

That pattern prevents disasters when filters are too broad.

Optimistic concurrency with rowversion

If multiple writers can update the same row, I add a rowversion column and use it in WHERE clauses to avoid silent overwrites.

ALTER TABLE dbo.Employee ADD RowVer ROWVERSION;

-- Read
SELECT EmpId, EmpNo, SSN, RowVer
FROM dbo.Employee
WHERE EmpNo = '5';

-- Update only if rowversion matches
UPDATE dbo.Employee
SET SSN = '555666777'
WHERE EmpNo = '5'
  AND RowVer = 0x00000000000007D3;

If the update affects 0 rows, the row changed since you read it. That gives you a clean concurrency signal instead of data drift.

DELETE: handle with surgical care

DELETE is often the right tool, but I treat it like a scalpel. There’s no undo unless you’re inside a transaction or have backups.

Preview and delete a single row

SELECT EmpId, EmpNo
FROM dbo.Employee
WHERE EmpNo = '6';

DELETE FROM dbo.Employee
WHERE EmpNo = '6';

Delete with transaction and validation

BEGIN TRANSACTION;

SELECT EmpId, EmpNo
FROM dbo.Employee
WHERE EmpNo IN ('4', '5');

DELETE FROM dbo.Employee
WHERE EmpNo IN ('4', '5');

SELECT EmpId, EmpNo
FROM dbo.Employee
WHERE EmpNo IN ('4', '5');

-- ROLLBACK TRANSACTION;
COMMIT TRANSACTION;

I intentionally keep a rollback option until I’m 100% confident.

When NOT to delete

In many systems, physical deletes cause audit gaps, broken foreign keys, or compliance risk. In those cases, I use soft deletes.

ALTER TABLE dbo.Employee ADD IsDeleted BIT NOT NULL CONSTRAINT DFEmployeeIsDeleted DEFAULT (0);

UPDATE dbo.Employee
SET IsDeleted = 1
WHERE EmpNo = '6';

If you go this route, make sure your SELECT queries filter out deleted rows.

Transactions and isolation: safe CRUD in real systems

CRUD statements don’t live in isolation. They’re part of transactions, and transaction settings can change behavior under load.

A simple, safe transaction

BEGIN TRANSACTION;

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
VALUES ('7', '999887777', '1998-04-02', SYSUTCDATETIME(), 'system');

UPDATE dbo.Employee
SET CreatedBy = 'hr'
WHERE EmpNo = '7';

COMMIT TRANSACTION;

I use this pattern whenever I need multiple statements to either succeed together or fail together.

Isolation levels in practice

Most OLTP systems use READ COMMITTED. If you see blocking or deadlocks, I first consider READ COMMITTED SNAPSHOT at the database level. It reduces reader/writer contention without losing consistency for most use cases.
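Enabling it is a one‑time database setting. A sketch against the sample database from this post (WITH ROLLBACK IMMEDIATE terminates open transactions, so schedule this, don’t spring it on a live system):

ALTER DATABASE geeks
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

After this, statements running under READ COMMITTED read row versions instead of waiting on writer locks.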

Example to test a higher isolation level for a specific block:

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

SELECT EmpId, EmpNo
FROM dbo.Employee
WHERE EmpNo BETWEEN '1' AND '10';

-- Other statements

COMMIT TRANSACTION;

Use this carefully. It can lock ranges and impact throughput.

A practical error handling wrapper

I use TRY/CATCH plus XACT_ABORT for scripts so errors don’t leave open transactions.

SET XACT_ABORT ON;

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Employee
    SET CreatedBy = 'hr'
    WHERE EmpNo = '7';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;

This pattern prevents half‑finished updates and makes failures explicit.

Common mistakes I see (and how I avoid them)

These are the gotchas that repeatedly cause production issues.

1) Missing WHERE clause in UPDATE or DELETE

I always run a SELECT with the exact same filter first. If I can’t explain the filter to another engineer in one sentence, it’s too risky.

2) Implicit column order in INSERT

Schemas evolve. If you omit column lists, your script may “work” but insert corrupted data. I never omit columns unless I’m using a SELECT * INTO for a scratch table.

3) Identity insert without proper control

IDENTITY_INSERT is session‑scoped, and only one table per session can have it ON. Inserting explicit identity values on a busy table also risks colliding with newly generated values or leaving the seed in a surprising state. If you need it, run it in a controlled script during a low‑traffic window.

4) Using GETDATE in multi‑region systems

You’ll fight time zone bugs forever. Use UTC and convert at the edge.

5) Updating by non‑unique fields

If you need to update multiple rows, make it explicit and check @@ROWCOUNT. If you intend one row, use a unique key.

6) Overusing SELECT *

It’s a performance and stability problem. It can increase network payloads and break consumers when columns change. I list columns explicitly in anything that matters.

7) Relying on implicit conversions

If a VARCHAR date becomes a DATETIME by implicit conversion, you’ll eventually get locale‑based bugs. I cast explicitly in scripts.
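A small illustration of why this bites (values hypothetical). An ambiguous string like '02/03/2000' flips between Feb 3 and Mar 2 depending on session language; an ISO 8601 literal or an explicit CONVERT with a style number does not:

-- Risky: interpretation depends on the session's language/DATEFORMAT setting
SELECT EmpId FROM dbo.Employee WHERE DOB = '02/03/2000';

-- Explicit: style 23 is yyyy-mm-dd, unambiguous in any locale
SELECT EmpId FROM dbo.Employee WHERE DOB = CONVERT(DATE, '2000-03-02', 23);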

Performance patterns that actually matter

CRUD performance is mostly about indexing, row size, and transaction scope. Here’s how I keep things fast.

Indexing for READ vs WRITE

  • If you read by a column often, index it. (Here, the UNIQUE constraint on EmpNo already creates a usable index, so a separate non‑clustered index on EmpNo alone would be redundant.)
  • If you frequently query by DOB and EmpNo, consider a composite index.

CREATE INDEX IXEmployeeDOB_EmpNo ON dbo.Employee (DOB, EmpNo);

Every index speeds up reads but slows down inserts and updates. For small tables, it doesn’t matter; for big ones, it does. I review index strategy whenever single‑row write latency climbs past the typical 10–15 ms range.

Batch writes to reduce overhead

If you’re inserting or updating many rows, batch them. I often use batches of 500–2,000 rows depending on row size and lock pressure.

Avoid scalar functions in WHERE clauses

If you wrap indexed columns in functions, SQL Server can’t use the index effectively. Prefer range filters on raw columns.

Bad:

SELECT * FROM dbo.Employee WHERE YEAR(DOB) = 2000;

Better:

SELECT * FROM dbo.Employee WHERE DOB >= '2000-01-01' AND DOB < '2001-01-01';

Use parameterized queries

Parameter sniffing can be a real issue for some workloads, but ad hoc SQL is worse. I use parameters and only reach for query hints when I’ve measured a real problem.

Keep transactions short

Long transactions hold locks and block readers. In practice, I try to keep transactional work under a few hundred milliseconds for OLTP. If you need minutes, consider staging data first and swapping it in with a short transaction.

Modern workflows in 2026 that help CRUD

I don’t rely on tools blindly, but I do use them to reduce errors.

AI‑assisted SQL drafting

I use AI assistants to draft SQL, then I review like a code review. The assistant can build the skeleton quickly, but I verify filters, indexes, and transactions. Think of it like a junior engineer who works instantly but needs supervision.

Database projects and migrations

For production systems, I keep database objects in a project (for example, SQL Server Data Tools style projects or modern migration frameworks). That gives me:

  • Schema diffs during code review
  • Repeatable, versioned changes
  • Automated checks for unsafe operations

Guardrails for prod changes

In 2026, it’s standard to use:

  • Read replicas for analytics
  • Change scripts with dry‑run steps
  • Approval gates for destructive statements

Those guardrails don’t replace good CRUD skills, but they reduce blast radius when mistakes happen.

CRUD patterns for real scenarios

Here are a few patterns I use in real systems.

Idempotent insert (upsert‑like pattern)

If you need to ensure a row exists without duplicating it:

IF NOT EXISTS (SELECT 1 FROM dbo.Employee WHERE EmpNo = '7')
BEGIN
    INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
    VALUES ('7', '999887777', '1998-04-02', SYSUTCDATETIME(), 'system');
END

For higher concurrency, I use MERGE cautiously or a unique constraint with a try‑catch insert pattern. MERGE still needs careful testing in SQL Server.

Update with audit trail

When auditing is required, I write to an audit table in the same transaction.

BEGIN TRANSACTION;

UPDATE dbo.Employee
SET SSN = '111223333'
WHERE EmpNo = '5';

INSERT INTO dbo.EmployeeAudit (EmpNo, FieldName, OldValue, NewValue, ChangedAt, ChangedBy)
VALUES ('5', 'SSN', '5487946598', '111223333', SYSUTCDATETIME(), 'admin');

COMMIT TRANSACTION;

The key is that both changes commit together or fail together.

Soft delete with filtered index

If you go the soft delete route, I add a filtered index to keep common queries fast.

CREATE INDEX IXEmployeeActive
ON dbo.Employee (EmpNo)
WHERE IsDeleted = 0;

Then most queries can read from the filtered index quickly:

SELECT EmpId, EmpNo
FROM dbo.Employee
WHERE IsDeleted = 0 AND EmpNo = '7';

Upsert with try‑catch

When I need safe inserts under concurrency without MERGE, I use a simple try‑catch pattern.

BEGIN TRY
    INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
    VALUES ('10', '123123123', '1990-12-12', SYSUTCDATETIME(), 'system');
END TRY
BEGIN CATCH
    -- 2627/2601 = unique constraint/index violation: the row exists, so update it
    IF ERROR_NUMBER() IN (2627, 2601)
        UPDATE dbo.Employee
        SET SSN = '123123123'
        WHERE EmpNo = '10';
    ELSE
        THROW; -- anything else is a real error; don't swallow it
END CATCH;

This relies on a unique constraint to detect duplicates. It’s simple and robust in practice.

Table design choices that make CRUD easier

CRUD goes smoother when the table itself is designed well. I add guardrails so SQL Server helps me maintain correctness.

Default constraints

Defaults reduce code and avoid NULLs when you don’t need them.

ALTER TABLE dbo.Employee
ADD CONSTRAINT DFEmployeeCreatedDt DEFAULT (SYSUTCDATETIME()) FOR CreatedDt;

With this, I can omit CreatedDt from inserts if I want.
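For example, this insert (hypothetical values) leaves CreatedDt out entirely and lets the default fill it in:

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedBy)
VALUES ('13', '444555666', '1993-06-06', 'system');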

Check constraints

Simple checks prevent garbage data:

ALTER TABLE dbo.Employee
ADD CONSTRAINT CKEmployeeEmpNo_Format
CHECK (LEN(EmpNo) >= 1);

I keep checks small and fast. I don’t use them for business rules that change often.

Foreign keys and cascades

Foreign keys protect relationships, but cascading deletes can be dangerous if you’re not careful. I only enable cascading when I’m sure child records should always be removed with the parent.
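As a sketch, here is what that looks like with a hypothetical child table (not part of this post’s schema); the ON DELETE CASCADE clause is the part to think hard about:

-- Hypothetical child table to illustrate the trade-off
CREATE TABLE dbo.EmployeeBadge (
    BadgeId INT IDENTITY(1,1) NOT NULL,
    EmpId INT NOT NULL,
    CONSTRAINT PK_EmployeeBadge PRIMARY KEY (BadgeId),
    CONSTRAINT FK_EmployeeBadge_Employee FOREIGN KEY (EmpId)
        REFERENCES dbo.Employee (EmpId)
        ON DELETE CASCADE -- deleting an employee silently deletes their badges
);

Without the cascade, the delete fails loudly instead, which is often the safer default.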

Security and safety in CRUD

Security is part of CRUD, not an afterthought.

Least privilege

I create separate SQL roles for read‑only vs read‑write. The app shouldn’t have ALTER permissions in production.
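A minimal sketch of that split (the role names are illustrative, and in practice you’d grant on specific objects rather than the whole schema where possible):

CREATE ROLE app_reader;
GRANT SELECT ON SCHEMA::dbo TO app_reader;

CREATE ROLE app_writer;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_writer;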

Parameterized queries everywhere

I never build SQL by string concatenation in application code. Parameters prevent SQL injection and keep query plans stable.
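The same principle applies inside T‑SQL when you need dynamic SQL: pass values through sp_executesql parameters instead of concatenating them into the string. A minimal sketch:

DECLARE @EmpNo VARCHAR(10) = '5';

EXEC sp_executesql
    N'SELECT EmpId, EmpNo, DOB FROM dbo.Employee WHERE EmpNo = @EmpNo',
    N'@EmpNo VARCHAR(10)',  -- parameter definition
    @EmpNo = @EmpNo;        -- the value never touches the SQL text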

Read/Write separation

If I have heavy analytics reads, I push them to a read replica. That protects write performance and reduces locking issues.

Monitoring and troubleshooting CRUD in production

CRUD doesn’t end at execution. You need visibility.

Log row counts and durations

In high‑value operations, I log @@ROWCOUNT and timing at the application layer. It helps diagnose “why did this update take 10 seconds today?” quickly.

Track deadlocks

When a deadlock happens, I capture the deadlock graph and look for two patterns: (1) inconsistent index usage between two queries, or (2) transactions touching tables in different orders. The fix is usually a consistent ordering or an index adjustment.
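If you don’t already capture deadlocks explicitly, SQL Server’s built‑in system_health Extended Events session usually has recent graphs. A sketch of pulling them out (output shape may vary by version):

-- Recent deadlock graphs recorded by the default system_health session
SELECT CAST(event_data AS XML) AS DeadlockGraph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';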

Use Query Store

Query Store is a lifesaver for understanding plan regressions. It’s one of the first features I enable on production databases.
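Turning it on is a database‑level setting; against the sample database from this post:

ALTER DATABASE geeks SET QUERY_STORE = ON;
ALTER DATABASE geeks SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

The defaults are a reasonable starting point; tune capture mode and retention once you see how much plan data your workload generates.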

CRUD for bulk operations

Single‑row operations are one thing; bulk operations are a different game.

Batch update pattern

WHILE 1 = 1
BEGIN
    UPDATE TOP (1000) dbo.Employee
    SET CreatedBy = 'migration'
    WHERE CreatedBy = 'system';

    IF @@ROWCOUNT = 0 BREAK;
END

This limits lock duration and keeps the system responsive.

Staging tables

For large imports, I load data into a staging table first, validate it, then merge into the target table. That keeps the main table clean and reduces the risk of partial failures.
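A sketch of that flow, with hypothetical staging table names; the point is that only the final insert runs in a transaction, and it is short:

-- Load raw data into staging first (via BULK INSERT, bcp, or a pipeline),
-- then validate it at leisure, outside any transaction.

-- Finally, move only clean, non-duplicate rows in one short transaction
BEGIN TRANSACTION;

INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
SELECT s.EmpNo, s.SSN, s.DOB, SYSUTCDATETIME(), 'import'
FROM dbo.Employee_staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Employee AS e WHERE e.EmpNo = s.EmpNo);

COMMIT TRANSACTION;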

Realistic CRUD scenarios I see often

These are common in enterprise systems and come up repeatedly.

Fixing a missing record

When a critical reference record is missing, I insert it with a transaction and log the change. I also check whether it should exist based on upstream data, so it doesn’t get re‑deleted later.

Correcting bad data after an incident

I run a SELECT to confirm the scope, then update with a precise filter and check the row count. If the count is larger than expected, I stop immediately and investigate.

Cleaning up duplicates

I identify duplicates with a grouping query, then delete or consolidate. I avoid deleting blindly; if duplicates are real customers or employees, consolidation is usually safer than deletion.

Traditional vs modern CRUD approaches

I still use classic SQL, but workflows have changed. Here’s how I think about it:

Aspect: Traditional → Modern (2026)

  • Authoring: manual SQL scripts → AI‑assisted drafts + human review
  • Deployment: run in prod manually → migration pipelines + approvals
  • Validation: ad hoc SELECTs → pre/post checks + automated tests
  • Monitoring: minimal logs → Query Store + structured app logs
  • Rollback: manual fixes → automated backups + controlled rollback

The SQL syntax is the same, but the safety net is stronger.

Common pitfalls with MERGE

MERGE looks great for upserts, but I use it cautiously. Edge cases around concurrency, triggers, and unexpected matches can cause headaches. If I use it, I keep it minimal and test it with realistic data volumes.

MERGE dbo.Employee AS target
USING (SELECT '11' AS EmpNo, '888777666' AS SSN) AS source
    ON target.EmpNo = source.EmpNo
WHEN MATCHED THEN
    UPDATE SET SSN = source.SSN
WHEN NOT MATCHED THEN
    INSERT (EmpNo, SSN, CreatedDt, CreatedBy)
    VALUES (source.EmpNo, source.SSN, SYSUTCDATETIME(), 'system');

I still prefer the try‑catch insert pattern when I only need “insert or update” and I want predictable behavior.

Testing CRUD changes before production

I always test CRUD changes in a lower environment with production‑like data volume. Here’s my checklist:

  • Validate the SELECT filter returns the expected rows
  • Run the update or delete in a transaction and verify changes
  • Check the row count against expectations
  • Measure execution time and index usage
  • Confirm no unexpected locks or blocking

This is boring work, but it prevents late‑night surprises.

A small CRUD checklist I use during incidents

When I’m on‑call and a quick fix is needed, I use this mental list:

  • Is the WHERE clause correct and specific?
  • Did I preview with a SELECT?
  • Do I have a rollback strategy?
  • Is the change logged or auditable?
  • Will this cause downstream effects?

If I can’t answer one of those questions, I slow down and get help.

Putting it all together: a full example

Here’s a realistic mini‑workflow that covers insert, read, update, and delete safely.

SET XACT_ABORT ON;

BEGIN TRY
    BEGIN TRANSACTION;

    -- Create
    INSERT INTO dbo.Employee (EmpNo, SSN, DOB, CreatedDt, CreatedBy)
    VALUES ('12', '333444555', '1991-05-05', SYSUTCDATETIME(), 'system');

    -- Read
    SELECT EmpId, EmpNo, SSN
    FROM dbo.Employee
    WHERE EmpNo = '12';

    -- Update
    UPDATE dbo.Employee
    SET CreatedBy = 'hr'
    WHERE EmpNo = '12';

    -- Confirm
    SELECT EmpId, EmpNo, CreatedBy
    FROM dbo.Employee
    WHERE EmpNo = '12';

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;
END CATCH;

This is the pattern I’d rather copy‑paste than improvise when time is tight.

Final thoughts

CRUD isn’t glamorous, but it’s the backbone of every data system. If your CRUD is precise, your systems are easier to debug, safer to change, and faster to ship. I’ve seen teams transform their reliability just by tightening their CRUD habits—explicit columns, careful WHERE clauses, transactions, and validation steps.

Use the examples above as a starting point, then adapt them to your real tables and workflows. The goal isn’t to memorize syntax; it’s to build habits that keep your data trustworthy even when you’re under pressure.

