SQL Server GETDATE() Function: Practical “Now” Without Time Bugs

Production bugs around time almost never come from “not knowing SQL Server has dates.” They come from subtle mismatches: local time vs UTC, datetime rounding, daylight-saving jumps, non-sargable filters, or mixing app clocks with database clocks. I’ve watched teams lose hours chasing “missing” records that were actually filtered out by a time boundary that shifted at midnight, or by a report server running in a different time zone than the database.

GETDATE() is one of the simplest tools in SQL Server, and that’s exactly why it’s easy to over-trust. When you use it well, you get reliable audit timestamps, consistent “as-of” queries, and predictable defaults without relying on the application layer. When you use it casually, you end up storing ambiguous local times and writing queries that look correct but quietly return the wrong slice of data.

I’ll walk through what GETDATE() returns, how it differs from closely related functions, and the patterns I use in 2026 for schema design, auditing, and time-safe querying—plus the mistakes I still see in real codebases and how I prevent them.

What GETDATE() actually returns (and why the “format” can mislead you)

GETDATE() returns the current date and time from the SQL Server instance’s system clock.

  • Return type: datetime
  • Typical display in SSMS: YYYY-MM-DD hh:mm:ss.mmm (for example: 2021-01-03 14:42:58.970)

A key detail: the “format” you see is not a contract of the function. It’s a presentation choice by your client/tool plus your session settings. The value is a datetime number under the hood, and it can be rendered differently depending on locale and conversion style.

Here’s the canonical baseline query:

SELECT GETDATE() AS current_local_datetime;

If you need a stable string representation (for logs, exports, or deterministic formatting), I don’t rely on implicit formatting. I explicitly convert:

SELECT
    GETDATE() AS as_datetime,
    CONVERT(varchar(23), GETDATE(), 121) AS iso_121,  -- yyyy-mm-dd hh:mi:ss.mmm
    CONVERT(varchar(33), SYSDATETIME(), 126) AS iso_126; -- higher precision, ISO 8601-ish

Two practical rules I follow:

1) Store time as a time type (not a formatted string).

2) Format only at the boundary (UI, exports, email, JSON response).

The hidden gotcha: string conversions depend on settings

Even if you never change your data types, you can still get confusing behavior if you convert dates to strings without an explicit style.

  • CAST(GETDATE() AS varchar(30)) is not stable across language settings.
  • CONVERT(varchar(30), GETDATE(), 121) is stable.

When I review code, I treat “date-to-string without a style” the same way I treat “implicit join”: it might work today, but it’s a reliability risk.
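If you want to see the difference for yourself, here is a quick scratch-session sketch (the language names are standard SQL Server settings; the sample outputs in the comments are illustrative, not guaranteed byte-for-byte):

```sql
-- Demonstration: the same instant rendered under two language settings.
SET LANGUAGE us_english;
SELECT CAST(GETDATE() AS varchar(30))        AS implicit_cast,   -- e.g. 'Feb 11 2026  3:00PM'
       CONVERT(varchar(23), GETDATE(), 121)  AS explicit_121;    -- stable: '2026-02-11 15:00:00.000'

SET LANGUAGE Deutsch;
SELECT CAST(GETDATE() AS varchar(30))        AS implicit_cast,   -- month name/format can change
       CONVERT(varchar(23), GETDATE(), 121)  AS explicit_121;    -- unchanged: style 121 ignores language
```

The implicit cast uses the session's language settings; style 121 does not, which is why it belongs in anything that gets parsed downstream.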

Precision and clock semantics: GETDATE() vs friends

If your database needs “now,” you have several choices. The differences that matter are (a) time zone basis (local vs UTC), (b) precision, and (c) return type.

Quick comparison

| Function | Returns | Time zone basis | Precision | Notes |
| --- | --- | --- | --- | --- |
| GETDATE() | datetime | Local (server) | ~3.33 ms increments (rounded) | Most common, lower precision |
| CURRENT_TIMESTAMP | datetime | Local (server) | Same as GETDATE() | ANSI-style synonym for GETDATE() |
| SYSDATETIME() | datetime2(7) | Local (server) | 100 ns | High precision |
| GETUTCDATE() | datetime | UTC | ~3.33 ms increments (rounded) | UTC, but still datetime |
| SYSUTCDATETIME() | datetime2(7) | UTC | 100 ns | My default choice for new systems |
| SYSDATETIMEOFFSET() | datetimeoffset(7) | Local + offset | 100 ns | Captures offset explicitly |

A common misconception I still see in code reviews: CURRENT_TIMESTAMP is not UTC in SQL Server. It’s equivalent to GETDATE() (local server time).
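A one-line query makes this concrete. On a server whose clock is not set to UTC, the first two columns match and the third differs by the server's UTC offset:

```sql
-- CURRENT_TIMESTAMP reads the same local clock as GETDATE().
SELECT GETDATE()         AS local_now,
       CURRENT_TIMESTAMP AS ansi_now,   -- same value as local_now (local server time)
       GETUTCDATE()      AS utc_now;    -- shifted by the server's UTC offset
```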

When GETDATE() is “good enough”

I happily use GETDATE() when:

  • I’m in a legacy schema already using datetime.
  • I’m writing an ad-hoc administrative query and don’t care about sub-millisecond precision.
  • I need local server time specifically (for example, a schedule tied to local business hours on that server).

When I reach for something else

For new designs, I prefer UTC and a modern storage type:

  • If I need UTC: SYSUTCDATETIME() (or GETUTCDATE() if the schema is stuck on datetime).
  • If I need local time plus explicit offset: SYSDATETIMEOFFSET().
  • If I need high precision locally: SYSDATETIME().

“Now” consistency inside a statement

SQL Server treats these functions as non-deterministic, but within a single statement you should expect GETDATE() to behave like a constant (evaluated once for the statement). Across multiple statements, you can easily get drift.

If I need the same timestamp reused across several statements (common in ETL scripts), I capture it once:

DECLARE @now_utc datetime2(7) = SYSUTCDATETIME();

UPDATE dbo.Invoice
SET ProcessedAtUtc = @now_utc
WHERE ProcessedAtUtc IS NULL;

INSERT INTO dbo.InvoiceEvent(InvoiceId, EventType, OccurredAtUtc)
SELECT i.InvoiceId, 'Processed', @now_utc
FROM dbo.Invoice AS i
WHERE i.ProcessedAtUtc = @now_utc;

That pattern is boring—and it saves you from “these two tables disagree by 4ms” investigations.

The datetime return type: what it implies for storage, comparisons, and rounding

Because GETDATE() returns datetime, it inherits all the quirks of datetime:

  • Limited precision compared to datetime2.
  • Rounding to fixed increments (which can matter in ordering and equality checks).
  • A smaller representable range than datetime2.
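The rounding is easy to demonstrate: datetime stores times in increments of roughly 1/300 of a second (.000, .003, .007), so a value can round across a day boundary:

```sql
-- datetime rounds to .000/.003/.007 increments; datetime2 keeps the value.
SELECT CAST('2026-02-11 23:59:59.999' AS datetime)     AS rounds_to_next_day, -- 2026-02-12 00:00:00.000
       CAST('2026-02-11 23:59:59.999' AS datetime2(3)) AS keeps_value;        -- 2026-02-11 23:59:59.999
```

If that literal were an "end of day" boundary, the datetime version would silently include the first instant of the next day.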

Why datetime rounding matters in practice

If you’re using GETDATE() for auditing, rounding usually doesn’t matter. If you’re using it as part of a “watermark” (incremental load boundary) or you compare timestamps for equality, it absolutely can matter.

Common failure pattern:

1) You store GETDATE() to mark “last processed time.”

2) You later query WHERE OccurredAt > @last_processed.

3) You miss rows that occurred within the same rounded bucket.

When I see watermark logic, I prefer one of these strategies:

  • Use datetime2(7) with SYSUTCDATETIME().
  • Or store an increasing surrogate (like an identity/bigint) alongside the time.
  • Or use >= with an additional tie-breaker.

Here’s a tie-breaker pattern that is resilient even when timestamps collide:

-- Suppose EventId is increasing and OccurredAtUtc is indexed
DECLARE @last_time datetime2(7) = @p_last_time;
DECLARE @last_id bigint = @p_last_id;

SELECT TOP (1000) e.EventId, e.OccurredAtUtc, e.Payload
FROM dbo.EventLog AS e
WHERE (e.OccurredAtUtc > @last_time)
   OR (e.OccurredAtUtc = @last_time AND e.EventId > @last_id)
ORDER BY e.OccurredAtUtc, e.EventId;

Even if two events share the exact same timestamp, you can still progress without skipping or duplicating.

Using GETDATE() as a DEFAULT (and what I do for created_at/updated_at)

The most common real use of GETDATE() is as a default value for an audit column.

A runnable example with GETDATE()

CREATE TABLE dbo.MessageLog
(
    MessageId int IDENTITY(1,1) NOT NULL CONSTRAINT PK_MessageLog PRIMARY KEY,
    MessageText varchar(150) NOT NULL,
    GeneratedAt datetime NOT NULL CONSTRAINT DF_MessageLog_GeneratedAt DEFAULT (GETDATE())
);

INSERT INTO dbo.MessageLog(MessageText)
VALUES ('Order placed by Jordan Lee');

INSERT INTO dbo.MessageLog(MessageText)
VALUES ('Payment authorized for order #48219');

SELECT MessageId, MessageText, GeneratedAt
FROM dbo.MessageLog
ORDER BY MessageId;

That’s a solid baseline for legacy datetime schemas.

My 2026 default for new systems: UTC + datetime2

If I’m designing a new table today, I store UTC in datetime2, and I name it that way:

CREATE TABLE dbo.OrderAudit
(
    OrderAuditId bigint IDENTITY(1,1) NOT NULL CONSTRAINT PK_OrderAudit PRIMARY KEY,
    OrderId bigint NOT NULL,
    EventType varchar(40) NOT NULL,
    OccurredAtUtc datetime2(7) NOT NULL
        CONSTRAINT DF_OrderAudit_OccurredAtUtc DEFAULT (SYSUTCDATETIME())
);

I like this for three reasons:

  • UTC timestamps compare cleanly across services, regions, containers, and replicas.
  • datetime2 avoids legacy datetime rounding behavior.
  • The column name prevents misunderstandings during incident response.

updated_at: my recommendation

If you need updated_at, you have three main approaches. I recommend making a deliberate choice instead of “whatever the last project did.”

| Approach | Traditional method | Modern method (what I prefer) | Why |
| --- | --- | --- | --- |
| App-managed | App sets UpdatedAt | App sets UpdatedAtUtc using one time source | Works across multiple DBs, but requires discipline |
| Trigger | AFTER UPDATE sets UpdatedAt = GETDATE() | Trigger sets UpdatedAtUtc = SYSUTCDATETIME() and guards against recursion | Central enforcement, but needs careful testing |
| Computed | Not reliable for true update time | Avoid for audit trails | Doesn't represent "when data changed" correctly |

For systems with multiple writers (services, jobs, manual admin updates), I prefer a trigger so the database enforces the rule. For systems with a single well-controlled writer, app-managed can be fine.

Here’s a trigger example that stamps UTC and avoids changing the timestamp when nothing changes:

CREATE TABLE dbo.Customer
(
    CustomerId bigint IDENTITY(1,1) NOT NULL CONSTRAINT PK_Customer PRIMARY KEY,
    Email varchar(320) NOT NULL,
    DisplayName nvarchar(120) NOT NULL,
    CreatedAtUtc datetime2(7) NOT NULL CONSTRAINT DF_Customer_CreatedAtUtc DEFAULT (SYSUTCDATETIME()),
    UpdatedAtUtc datetime2(7) NOT NULL CONSTRAINT DF_Customer_UpdatedAtUtc DEFAULT (SYSUTCDATETIME())
);
GO

CREATE TRIGGER dbo.TR_Customer_SetUpdatedAtUtc
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @now datetime2(7) = SYSUTCDATETIME();

    UPDATE c
    SET UpdatedAtUtc = @now
    FROM dbo.Customer AS c
    INNER JOIN inserted AS i ON i.CustomerId = c.CustomerId
    INNER JOIN deleted AS d ON d.CustomerId = c.CustomerId
    WHERE
        -- Only stamp when something actually changed
        (i.Email <> d.Email OR i.DisplayName <> d.DisplayName);
END;
GO

This is intentionally explicit. I’d rather maintain a short “fields that matter” list than accept updated_at noise that changes on every write.

Defaults aren’t a free pass: what they do and don’t protect you from

A default constraint like DEFAULT (GETDATE()) runs when the row is inserted and the column is omitted.

  • It does not apply when you explicitly insert NULL; on a NOT NULL column, that insert simply fails rather than falling back to the default.
  • It does not update itself on future updates.

So my baseline for audit columns is:

  • Make them NOT NULL.
  • Set a default.
  • Decide whether updates should be trigger-managed.
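Here's a small sketch of those rules in action. The table name and constraint name are hypothetical, invented for this illustration:

```sql
-- Hypothetical table to show when a default does (and doesn't) fire.
CREATE TABLE dbo.AuditDemo
(
    Id int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Note varchar(50) NOT NULL,
    CreatedAt datetime NOT NULL CONSTRAINT DF_AuditDemo_CreatedAt DEFAULT (GETDATE())
);

INSERT INTO dbo.AuditDemo(Note) VALUES ('column omitted');               -- default fires
INSERT INTO dbo.AuditDemo(Note, CreatedAt) VALUES ('keyword', DEFAULT);  -- default fires explicitly
-- INSERT INTO dbo.AuditDemo(Note, CreatedAt) VALUES ('oops', NULL);     -- fails: explicit NULL bypasses the default
```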

Querying with “now”: time windows that stay fast and correct

The biggest practical risk with GETDATE() isn’t the function itself—it’s the filters people write around it.

Sargable vs non-sargable patterns

When you write a time window filter, keep the function on the constant side, not on the column side. This keeps the predicate sargable (so an index on the datetime column can be used efficiently).

Good pattern (function applied to GETDATE(), not the column):

-- Last 7 days (rolling)
SELECT o.OrderId, o.PlacedAtUtc
FROM dbo.[Order] AS o
WHERE o.PlacedAtUtc >= DATEADD(day, -7, SYSUTCDATETIME());

Risky pattern (function applied to the column):

-- Avoid: wraps the column in a function
SELECT o.OrderId, o.PlacedAtUtc
FROM dbo.[Order] AS o
WHERE DATEDIFF(day, o.PlacedAtUtc, SYSUTCDATETIME()) <= 7;

That second version is a classic “works in dev, slow in prod” mistake once the table grows.

Rolling windows vs calendar windows (and why people mix them up)

When someone says “last 7 days,” I always clarify whether they mean:

  • Rolling: the previous 7 24-hour periods from now.
  • Calendar: from the start of the day 7 days ago until the end of today (or until now).

Both are legitimate, but the SQL differs.

Rolling 7 days:

WHERE e.OccurredAtUtc >= DATEADD(day, -7, SYSUTCDATETIME())

Calendar-ish 7 days (start-of-day boundary):

DECLARE @today_utc date = CAST(SYSUTCDATETIME() AS date);
DECLARE @start date = DATEADD(day, -7, @today_utc);

SELECT e.EventId
FROM dbo.EventLog AS e
WHERE e.OccurredAtUtc >= @start
  AND e.OccurredAtUtc < DATEADD(day, 1, @today_utc);

The second version is often what business stakeholders expect when they say “include all of those days.”

Day boundaries without off-by-one bugs

If you need “today” in local time, decide what “today” means (server local? a business time zone?), then compute boundaries explicitly.

Example: rows from the start of the local day to now (server local time):

DECLARE @start_of_today date = CAST(GETDATE() AS date);

SELECT e.EventId, e.OccurredAt
FROM dbo.EventLog AS e
WHERE e.OccurredAt >= @start_of_today
  AND e.OccurredAt < DATEADD(day, 1, @start_of_today);

I like this because it’s obvious, it’s index-friendly, and it avoids confusion about inclusive/exclusive end ranges.

The “end of day” trap: don’t use 23:59:59.997

I still run into predicates like:

  • WHERE OccurredAt <= '2026-02-11 23:59:59.997'

That looks precise, but it’s fragile:

  • It bakes in datetime’s old precision assumptions.
  • It breaks if you switch to datetime2.
  • It’s harder to reason about than an exclusive end boundary.

My rule: use [start, end) ranges—>= start and < end.
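Here's what that looks like for a single business day. I use the unambiguous yyyymmdd literal format so the boundary doesn't depend on session date settings (the table and column come from the earlier examples):

```sql
-- All of 2026-02-11, as a half-open range: [start, end)
-- Works the same whether the column is datetime or datetime2.
SELECT e.EventId, e.OccurredAt
FROM dbo.EventLog AS e
WHERE e.OccurredAt >= '20260211'   -- start of the day, inclusive
  AND e.OccurredAt <  '20260212';  -- start of the next day, exclusive
```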

If you store UTC (recommended), still serve local “business day” reports

I store UTC, then translate at query time when the report requires a local business day. In SQL Server, AT TIME ZONE is the tool I reach for.

Example: group orders by America/New_York local date while storing UTC:

SELECT
    CAST(o.PlacedAtUtc AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS date) AS LocalOrderDate,
    COUNT(*) AS Orders
FROM dbo.[Order] AS o
GROUP BY CAST(o.PlacedAtUtc AT TIME ZONE 'UTC' AT TIME ZONE 'Eastern Standard Time' AS date)
ORDER BY LocalOrderDate;

This is exactly where local time belongs: at the reporting edge, not as the persisted truth.

Practical patterns I use with GETDATE() in real code

GETDATE() shows up in more than just “give me the current time.” These are the patterns I reach for most often when I’m in a codebase that uses local datetime.

1) “As-of” snapshots for consistent reporting

When a report runs for minutes and joins multiple large tables, I want the same “now” across the entire query.

DECLARE @as_of datetime = GETDATE();

SELECT a.AccountId, a.Balance, @as_of AS AsOfLocalTime
FROM dbo.Account AS a
WHERE a.IsActive = 1;

Even better, if the report touches multiple statements (temp tables, staging steps), the @as_of variable becomes your anchor.

2) Safe “expired” logic with explicit windows

I avoid “expires today” logic based on dates alone unless the product truly wants calendar expiration.

For rolling expiration (e.g., 30 days after creation):

SELECT t.TokenId
FROM dbo.AccessToken AS t
WHERE t.CreatedAt < DATEADD(day, -30, GETDATE());

For midnight-based expiration (calendar): store expiration as a timestamp and compare directly.

3) Soft-delete timestamps that don’t lie

If you soft-delete rows, store a deletion timestamp and who did it.

ALTER TABLE dbo.Customer
ADD DeletedAt datetime NULL,
    DeletedBy sysname NULL;

-- Soft delete
UPDATE dbo.Customer
SET DeletedAt = GETDATE(),
    DeletedBy = SUSER_SNAME()
WHERE CustomerId = @customer_id
  AND DeletedAt IS NULL;

If you do this, also standardize your “active rows” filter and index it appropriately.

4) Throttling and backoff logic

Sometimes you need “don’t process this job more than once every N minutes.” This is a solid use of GETDATE().

UPDATE j
SET LastAttemptAt = GETDATE()
FROM dbo.Job AS j
WHERE j.JobId = @job_id
  AND (j.LastAttemptAt IS NULL OR j.LastAttemptAt <= DATEADD(minute, -15, GETDATE()));

That predicate is readable, and it’s resilient.

Real-world edge cases: DST, replicas, and “time travel” bugs

If you’ve only tested time logic on a laptop at 2pm, you haven’t tested time logic.

Daylight Saving Time: the duplicate hour and the missing hour

When clocks fall back, local times repeat. When clocks spring forward, some local times never occur. If you store local GETDATE() timestamps, you can end up with:

  • Two different real moments that have the same local wall-clock time.
  • A “gap” where a local time can’t exist.

If you ever need to answer “what happened first?” under those conditions, local timestamps alone can’t guarantee correct ordering.

My rule: store UTC (SYSUTCDATETIME()), and if you must store a local representation, store it as a derived value (or store datetimeoffset so the offset is captured).

A subtle DST reporting bug: grouping by local date naïvely

This query:

SELECT CAST(e.OccurredAt AS date) AS LocalDate, COUNT(*) AS Events
FROM dbo.EventLog AS e
GROUP BY CAST(e.OccurredAt AS date);

looks fine if OccurredAt is local time. But if your servers move time zones, your SQL Agent job runs on a different machine, or you migrate databases, your “local” might not mean what you think it means.

If you store UTC, grouping by business local date should be explicit with AT TIME ZONE like I showed earlier.

Clock skew: the database isn’t always the only clock

In distributed systems, you might have:

  • App servers with NTP drift
  • Containers starting with wrong time config
  • Read replicas in different regions

If you mix app-generated timestamps with DB-generated timestamps, you’ll eventually see “events in the future” or negative durations.

To prevent that, I pick a single authority for persisted “now.” If the database is the system of record, I stamp times in the database (defaults/triggers). If the event is born outside the database (edge devices, offline clients), I store the client time separately and still stamp ReceivedAtUtc in the database.

Long-running statements and “as-of” consistency

For reports, you often want a single “as-of time” so the entire query is consistent.

I do this:

DECLARE @as_of datetime2(7) = SYSUTCDATETIME();

-- Use @as_of consistently throughout the report query
SELECT ...
WHERE SomeUtcTimestamp <= @as_of;

It also makes reruns reproducible: if a report looks wrong, you can re-run it “as of” the same timestamp and compare.

“Time travel” is real: the clock can go backwards

Most developers assume GETDATE() always increases. It usually does—but it’s not a promise.

If the OS clock is adjusted (manual change, virtualization issues, NTP step adjustments), GETDATE() can jump forward or backward. In high-integrity scenarios (financial ordering, distributed tracing correlation), I avoid relying on “now is monotonic” and instead store:

  • A sequence (identity/bigint), or
  • A database-generated ID like a bigint key, or
  • Both a timestamp and a durable ordering key.

Performance considerations: indexing, partitions, and parameterization

Time-based filters are some of the most common predicates in production systems. The good news is: they can be extremely fast—if your queries are written in a way the optimizer can use.

Index-friendly patterns I standardize

If you have an index on OccurredAtUtc, these patterns tend to behave well:

  • WHERE OccurredAtUtc >= @start AND OccurredAtUtc < @end
  • WHERE OccurredAtUtc >= DATEADD(day, -7, SYSUTCDATETIME())

These patterns often enable index seeks and efficient range scans.

Patterns I avoid (because they force scans)

  • WHERE CONVERT(date, OccurredAtUtc) = @some_date
  • WHERE DATEDIFF(day, OccurredAtUtc, SYSUTCDATETIME()) <= 7

Both apply a function to the column, which typically blocks index seeks.

Partitioning and retention jobs

If you partition large tables by time (daily/monthly partitions), your purge logic often uses “now” to decide what to retire.

Even without partitioning, a retention job should still use the same sargable boundary pattern:

-- Purge anything older than 90 days

DELETE TOP (10000) e

FROM dbo.EventLog AS e

WHERE e.OccurredAtUtc < DATEADD(day, -90, SYSUTCDATETIME());

I use batching (TOP) when the table is large to avoid long blocking and log spikes.
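A loop shape I often use for that batching looks like this (the loop structure is my convention; the table and cutoff come from the purge example above):

```sql
-- Batched purge: repeat small deletes until the last batch comes up short.
DECLARE @cutoff datetime2(7) = DATEADD(day, -90, SYSUTCDATETIME());

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) e
    FROM dbo.EventLog AS e
    WHERE e.OccurredAtUtc < @cutoff;

    IF @@ROWCOUNT < 10000 BREAK;  -- partial batch means we're done
END;
```

Capturing @cutoff once also means the boundary doesn't creep forward while the loop runs.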

Parameter sniffing and “now”

GETDATE() is evaluated at runtime, which is usually what you want. But if you put it in a stored procedure that gets compiled under unusual conditions (very selective or not selective), you can see plan instability.

My approach is pragmatic:

  • Keep predicates sargable.
  • If a proc is highly sensitive to selectivity, consider capturing boundaries into local variables.

Example:

DECLARE @cutoff datetime2(7) = DATEADD(day, -7, SYSUTCDATETIME());

SELECT ...
WHERE OccurredAtUtc >= @cutoff;

This can sometimes reduce weirdness in plans and makes debugging easier because you can print the cutoff.

Time zone strategy: making GETDATE() safe in multi-region systems

GETDATE() is local time. That’s fine—until it isn’t.

My default rule

  • Persist UTC.
  • Convert to local only at the edges.

When I can’t change the schema (legacy), I at least do the next best thing:

  • Document the meaning: “server local time” is not the same as “New York time.”
  • Name columns clearly (CreatedAtLocal vs CreatedAtUtc).
  • Keep conversion logic centralized.

When datetimeoffset is worth it

I use datetimeoffset(7) when I need to preserve “what offset did we believe at the time?” (for example, user-entered appointment times with an offset, or legal/audit records where the offset matters).

It’s not a magic solution—time zone names still matter, DST rules change historically—but it captures more information than a bare local datetime.
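A quick sketch of what the type buys you (the sample output in the comment assumes a server in a UTC-5 zone; yours will differ):

```sql
-- datetimeoffset keeps both the wall-clock time and the offset in effect when it was captured.
SELECT SYSDATETIMEOFFSET() AS local_with_offset,                   -- e.g. 2026-02-11 09:00:00.0000000 -05:00
       SWITCHOFFSET(SYSDATETIMEOFFSET(), '+00:00') AS same_instant_utc; -- same instant, rendered at +00:00
```

Because the offset travels with the value, two datetimeoffset values from different zones still compare correctly as instants.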

Testing time logic: keeping queries deterministic

Time is the enemy of reproducible tests.

What I avoid in tests

  • Tests that depend on the current wall clock.
  • Queries that call GETDATE() repeatedly and assume stable results.

What I do instead

  • Capture “now” into a variable and pass it through.
  • In app code, inject a clock or pass @now as a parameter to stored procedures.

In SQL-only test harnesses, even a simple pattern like this helps:

DECLARE @now datetime2(7) = '2026-02-11T15:00:00.0000000';

SELECT ...
WHERE OccurredAtUtc <= @now;

That turns a flaky test into a deterministic one.

Common mistakes I still see (and the fixes I actually ship)

1) Storing local time without labeling it

If you store local time, name it as local time (CreatedAtLocal). Better: don’t store local time as your primary timestamp.

Fix I ship: migrate to CreatedAtUtc datetime2(7) NOT NULL DEFAULT (SYSUTCDATETIME()), keep old column for compatibility during the transition, and backfill.

2) Using datetime when precision or range matters

datetime rounds to fixed increments; datetime2 is more precise and has a larger range.

Fix I ship: new columns use datetime2(7); existing datetime columns remain unless there’s a strong reason to change.

3) Formatting timestamps in SQL for APIs

I still see endpoints that return varchar timestamps produced by CONVERT in SQL.

Fix I ship: return a time type to the app, format in the application layer, and standardize on ISO 8601 in the API.

4) Non-sargable time predicates

DATEDIFF on the column and CONVERT(date, Column) in the WHERE clause are the two biggest offenders.

Fix I ship: rewrite filters as range predicates on the raw column, using precomputed boundaries.

5) Expecting GETDATE() to be deterministic

You can’t use GETDATE() in persisted computed columns or indexed views expecting stability.

Fix I ship: store the value at write time (default/trigger/app) and index the stored column.

6) Equality comparisons against “now”

I see code like:

WHERE CreatedAt = GETDATE()

That is almost never correct, because:

  • The value won’t match exactly (especially with rounding).
  • Even if it did in dev, it won’t in production.

Fix I ship: replace equality with a window (>= start AND < end) or use a key-based lookup.

What I recommend you standardize in 2026

If you want fewer timestamp bugs this year, standardize a small set of conventions and stick to them.

  • Default storage: datetime2(7) in UTC, named with ...Utc, defaulted with SYSUTCDATETIME().
  • Audit fields: CreatedAtUtc always; UpdatedAtUtc only if you have a clear use case.
  • Reporting: convert UTC to the business time zone at query time with AT TIME ZONE.
  • Query patterns: range predicates with explicit boundaries; avoid wrapping the datetime column in functions.
  • Multi-statement scripts: capture @now once and reuse it.

When you’re maintaining an older SQL Server codebase, GETDATE() is still a practical tool—especially for defaults in datetime tables and quick operational queries. But if you’re building anything that spans multiple services, multiple regions, or long-lived audit trails, I treat GETDATE() as a local convenience—not the foundation of truth.

A quick cheat sheet: the decisions I want you to make explicitly

If you take nothing else from this guide, make these choices explicit in your project:

1) What is “now” in your system: local server time (GETDATE()), UTC (SYSUTCDATETIME()), or local-with-offset (SYSDATETIMEOFFSET())?

2) What type do you store: legacy datetime or modern datetime2/datetimeoffset?

3) How do you query time windows: sargable range boundaries or column-wrapping functions?

4) How do you ensure consistency: capture @now per statement, per transaction, or per report?

Once those are standardized, GETDATE() becomes what it should be: a simple function that behaves predictably, rather than a quiet source of midnight incidents.
