PostgreSQL date_part() Function: Practical Guide, Edge Cases, and Performance

Most production databases eventually turn into time machines. Not because they’re magical, but because the questions you need to answer are time-shaped: “How many signups happened this week?”, “What hour do payments peak?”, “Which region sees the most activity on Mondays?”, “How long did shipments sit in transit?” If you store timestamps (you should), you also need a clean way to pull them apart without writing brittle string hacks or app-side loops.

In PostgreSQL, date_part() is one of the sharpest tools for that job. It extracts a single “subfield” (year, month, day, hour, etc.) from a date, timestamp, timestamptz, or interval. I reach for it when I need a scalar value I can group by, filter on, compare, or feed into another expression.

What I’ll do here is walk you through how date_part() behaves in the real world: return types and rounding, day-of-week traps, ISO week-year edge cases, timezone surprises, and how to keep queries fast when you’re extracting parts from millions of rows.

Why date_part() Shows Up Everywhere in Production SQL

I think about date_part() like a metal punch for a sheet of paper: you have a full timestamp, and you want one clean hole—just the hour, just the month, just the day-of-year—so you can sort things into buckets.

Common places I see it used:

  • Analytics queries: grouping signups by month, sessions by hour, errors by day-of-week.
  • Data quality checks: verifying that “end_date” is after “start_date”, finding rows that fall outside a business calendar.
  • Billing and reporting: extracting the year/quarter/month for statements, cohort labels, and cutoffs.
  • Data pipelines: generating derived columns during ETL, or in views/materialized views.

A key point: date_part() is not primarily about formatting. If what you need is “Jan 2026” on a dashboard, formatting functions like to_char() are often better. date_part() is for computation.

The Signature, the Return Type, and the Mental Model

The signature is simple:

date_part(field, source)
  • field is a text identifier like 'year', 'month', 'dow', 'epoch'.
  • source is a date, timestamp, timestamptz, or interval.
  • The return type is double precision.

That last bullet is the first “gotcha” I teach people. You’ll often get whole numbers (like 2026 for year), but PostgreSQL still returns floating point. If you need an integer, cast explicitly.

select date_part('year', timestamp '2026-02-03 09:10:11');
-- 2026 (double precision)

select date_part('year', timestamp '2026-02-03 09:10:11')::int;
-- 2026 (integer)

A small, runnable playground

If you want to follow along with realistic data, I like to create a tiny events table:

drop table if exists app_event;

create table app_event (
  event_id bigserial primary key,
  user_id bigint not null,
  event_name text not null,
  occurred_at timestamptz not null
);

insert into app_event (user_id, event_name, occurred_at) values
  (101, 'signup', '2026-01-31 23:50:00-05'),
  (101, 'login', '2026-02-01 00:05:00-05'),
  (202, 'purchase', '2026-02-02 12:15:30-05'),
  (303, 'login', '2026-02-03 09:10:11-05');

Now you can ask questions like “What hour do logins happen?”

select
  date_part('hour', occurred_at) as hour_local,
  count(*)
from app_event
where event_name = 'login'
group by 1
order by 1;

Field Values You’ll Actually Use (and the Traps Around Them)

PostgreSQL supports many fields. I’ll focus on the ones that show up repeatedly, plus the ones that surprise people.

Year-ish fields: year, decade, century

  • 'year' is the calendar year.
  • 'decade' is the year divided by 10 (so 1987 becomes 198).
  • 'century' is 20 for years 1901–2000, 21 for 2001–2100, etc.

Example:

select
  date_part('century', timestamp '2020-01-01') as century,
  date_part('decade', timestamp '2020-01-01') as decade,
  date_part('year', timestamp '2020-01-01') as year;

I avoid storing decade/century as data; I compute it when needed. If you want “decade buckets” for reporting, a safer label is often (date_part('year', ts)::int / 10) * 10, which yields 1980, 1990, 2000, etc.
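As a quick sanity check outside the database, the same integer-division trick can be sketched in Python (the function name here is mine, purely illustrative):

```python
# Decade buckets via integer division, mirroring the SQL expression
# (date_part('year', ts)::int / 10) * 10.
def decade_bucket(year: int) -> int:
    # For positive years, PostgreSQL's integer division truncates
    # the same way Python's // does.
    return (year // 10) * 10

print(decade_bucket(1987))  # 1980
print(decade_bucket(2026))  # 2020
```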

Month/day fields: month, day, doy

  • 'month' is 1–12.
  • 'day' is 1–31 (day-of-month).
  • 'doy' is 1–366 (day-of-year).

doy is excellent for seasonal analysis (support volume across the year, energy usage, etc.). Just remember leap years exist, so day 60 is not always the same calendar date.

One practical trick: if you’re doing seasonal patterns, I’ll often group by doy but also keep a sample date so humans can interpret the bucket.

select
  date_part('doy', occurred_at)::int as doy,
  min(occurred_at::date) as example_date,
  count(*)
from app_event
group by 1
order by 1;

Time fields: hour, minute, second, milliseconds, microseconds

These behave as you’d expect, but note the floating-point return type and that seconds can include fractions.

select
  date_part('hour', timestamp '2020-03-18 10:20:30.123') as h,
  date_part('minute', timestamp '2020-03-18 10:20:30.123') as m,
  date_part('second', timestamp '2020-03-18 10:20:30.123') as s;

-- s may show 30.123

If you’re grouping by seconds, cast carefully:

select date_part('second', ts)::int as sec_whole, count(*)
from (values (timestamp '2026-02-03 09:10:11.999')) v(ts)
group by 1;

A more defensive pattern for whole-second bucketing is to explicitly floor the value (helpful if you ever deal with negative intervals or if rounding ever creeps in):

select floor(date_part('second', ts))::int as sec_whole
from (values (timestamp '2026-02-03 09:10:11.999')) v(ts);

Day-of-week: dow vs isodow

This is where many dashboards go subtly wrong.

  • 'dow': 0–6 where Sunday = 0.
  • 'isodow': 1–7 where Monday = 1 and Sunday = 7.

If your business considers Monday the first day of the week (common in reporting), use isodow.

select
  occurred_at,
  date_part('dow', occurred_at) as dow,
  date_part('isodow', occurred_at) as isodow
from app_event
order by occurred_at;

The trap isn’t just the numbering. It’s the downstream ordering. If you group by dow and order ascending, your chart goes Sun, Mon, Tue… which is not what many teams expect.
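A related trap if you cross-check results in a script: Python's own conventions differ again (date.weekday() is 0 = Monday). A small adapter, sketched here with names of my own choosing, recovers both PostgreSQL numberings:

```python
from datetime import date

def pg_isodow(d: date) -> int:
    # Same convention as date_part('isodow', ...): Monday = 1 .. Sunday = 7.
    return d.isoweekday()

def pg_dow(d: date) -> int:
    # Same convention as date_part('dow', ...): Sunday = 0 .. Saturday = 6.
    return d.isoweekday() % 7

sunday = date(2026, 2, 1)   # a Sunday
monday = date(2026, 2, 2)   # a Monday
print(pg_dow(sunday), pg_isodow(sunday))  # 0 7
print(pg_dow(monday), pg_isodow(monday))  # 1 1
```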

ISO year: isoyear

The ISO week-date system can assign the first few days of January to the previous ISO year, and the last few days of December to the next ISO year. That’s correct for ISO week reporting, but it’s shocking the first time you see it.

If you do “weekly reporting” where week 1 must start on Monday and you want ISO week semantics, pair isoyear with ISO week fields (often via to_char(ts, 'IYYY-IW')). If you don’t need ISO semantics, stick to calendar years plus date_trunc('week', ...) in your chosen timezone.

Here’s an example that makes the edge case obvious:

select
  d::date as d,
  date_part('year', d)::int as cal_year,
  date_part('week', d)::int as iso_week,
  date_part('isoyear', d)::int as iso_year,
  to_char(d, 'IYYY-IW') as iso_year_week
from generate_series(date '2025-12-28', date '2026-01-05', interval '1 day') s(d)
order by d;

If you’ve ever had a KPI dashboard “lose” the first couple days of January (or double-count late December), this is usually why.
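You can reproduce the same boundary behavior in plain Python with isocalendar(), which follows the same ISO 8601 rules; a handy cross-check when a dashboard disagrees with the database:

```python
from datetime import date, timedelta

# Walk the 2025/2026 boundary and compare calendar year vs ISO year/week,
# mirroring date_part('year'/'isoyear'/'week', d).
d = date(2025, 12, 28)
while d <= date(2026, 1, 5):
    iso_year, iso_week, _ = d.isocalendar()
    print(d, d.year, iso_year, iso_week)
    d += timedelta(days=1)
# Dec 29-31, 2025 report ISO year 2026, week 1: the "lost days" effect.
```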

Quarter: quarter

Quarter sounds simple until you get asked “is Q1 based on calendar or fiscal year?” date_part('quarter', ts) is always the calendar quarter (1–4).

select
  date_part('quarter', timestamp '2026-01-15')::int as q1,
  date_part('quarter', timestamp '2026-04-15')::int as q2,
  date_part('quarter', timestamp '2026-07-15')::int as q3,
  date_part('quarter', timestamp '2026-10-15')::int as q4;

For fiscal quarters, I shift the date first (or map months to fiscal quarters explicitly).
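Here is one way to express that month-to-fiscal-quarter mapping as a formula; the July fiscal start is purely an assumption for illustration:

```python
def fiscal_quarter(month: int, fiscal_start_month: int = 7) -> int:
    # Map a calendar month (1-12) to a fiscal quarter (1-4) for a fiscal
    # year starting in fiscal_start_month (July here, an assumption).
    shifted = (month - fiscal_start_month) % 12
    return shifted // 3 + 1

# With a July start: Jul-Sep = Q1, Oct-Dec = Q2, Jan-Mar = Q3, Apr-Jun = Q4.
print([fiscal_quarter(m) for m in range(1, 13)])
```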

Epoch: 'epoch'

date_part('epoch', ts) returns seconds since 1970-01-01 00:00:00 UTC.

I use it for:

  • Converting to/from systems that store seconds.
  • Rough duration calculations.
  • Exporting consistent numeric time values.

Example:

select date_part('epoch', timestamptz '1970-01-01 00:00:05+00');

-- 5

Be careful with timestamptz: the value is stored in UTC internally, but displayed in the session timezone.
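The same definition holds outside PostgreSQL, which is what makes epoch a good interchange value; a quick Python cross-check:

```python
from datetime import datetime, timezone

# An aware datetime's timestamp() is also seconds since
# 1970-01-01 00:00:00 UTC, matching date_part('epoch', ...).
ts = datetime(1970, 1, 1, 0, 0, 5, tzinfo=timezone.utc)
print(ts.timestamp())  # 5.0
```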

Time zone fields: timezone, timezone_hour, timezone_minute

These extract the timezone offset parts for timestamptz. They’re helpful for debugging and audits (“Why did this timestamp shift?”), but I rarely use them in business logic. If your app’s behavior depends on timezone offsets, you’re often better off being explicit with AT TIME ZONE and storing canonical values.

date_part() vs extract() vs date_trunc() (Pick the Right Tool)

PostgreSQL gives you multiple ways to reach for “parts of time,” and I pick based on what I want the output to be.

  • date_part('month', ts) gives you a number (double precision).
  • extract(month from ts) is the SQL-standard spelling for the same idea.
  • date_trunc('month', ts) gives you a timestamp rounded down to the start of the month.

Here’s how I decide:

  • Group events by month bucket → date_trunc('month', ts). Produces a canonical month-start timestamp, great for joins and ordering.
  • Extract the month number for a label → date_part('month', ts)::int. Produces 1–12.
  • Filter to “hour = 9” across dates → date_part('hour', ts) = 9 (plus an indexing strategy). Scalar extraction is straightforward.
  • Display a formatted date string → to_char(ts, ...). Formatting is its job.

And yes, I use extract() too—especially in teams that want SQL portability.

select
  extract(year from occurred_at) as y,
  extract(month from occurred_at) as m
from app_event;

Mentally, I treat extract() and date_part() as equivalent in intent: pick whichever makes your SQL more readable to your team.

Practical Query Patterns (Reporting, Cohorts, and “Time Buckets”)

This is the part that actually makes your queries feel professional: using date_part() for numeric extraction, but pairing it with sensible bucketing so results stay correct and easy to graph.

Pattern 1: Grouping by day-of-week (and ordering it correctly)

If you group by dow, you’ll get Sunday-first ordering by default. If you want Monday-first, use isodow.

select
  date_part('isodow', occurred_at)::int as weekday,
  count(*) as events
from app_event
group by 1
order by 1;

If you need names, I usually map them explicitly (stable and readable):

select
  date_part('isodow', occurred_at)::int as weekday,
  case date_part('isodow', occurred_at)::int
    when 1 then 'Mon'
    when 2 then 'Tue'
    when 3 then 'Wed'
    when 4 then 'Thu'
    when 5 then 'Fri'
    when 6 then 'Sat'
    when 7 then 'Sun'
  end as weekday_name,
  count(*) as events
from app_event
group by 1, 2
order by 1;

A small extra I like: if you’re visualizing, include a “weekend vs weekday” split.

select
  (date_part('isodow', occurred_at)::int in (6, 7)) as is_weekend,
  count(*)
from app_event
group by 1
order by 1;

Pattern 2: “Cohort month” (signup month) as a joinable key

For cohort analysis, I prefer date_trunc('month', ...) as the cohort key, then I can still pull numeric month/year if I need it.

with signups as (
  select
    user_id,
    min(occurred_at) filter (where event_name = 'signup') as signup_at
  from app_event
  group by user_id
)
select
  date_trunc('month', signup_at) as cohort_month,
  count(*) as users
from signups
where signup_at is not null
group by 1
order by 1;

If your BI tool insists on numbers, layer date_part() on top:

select
  date_part('year', cohort_month)::int as cohort_year,
  date_part('month', cohort_month)::int as cohort_month_num,
  users
from (
  with signups as (
    select user_id, min(occurred_at) filter (where event_name = 'signup') as signup_at
    from app_event
    group by user_id
  )
  select date_trunc('month', signup_at) as cohort_month, count(*) as users
  from signups
  where signup_at is not null
  group by 1
) x
order by 1, 2;

Pattern 3: Filtering by “business hour” (and keeping it readable)

I like to name the extracted piece in a CTE so the predicate is readable and you don’t repeat expressions:

with events_local as (
  select
    *,
    date_part('hour', occurred_at)::int as hour_of_day
  from app_event
)
select *
from events_local
where hour_of_day between 9 and 17
order by occurred_at;

This also makes it easier to index later (more on that soon).

Pattern 4: Rollups with date_part('epoch', ...) for rate calculations

If you want events per second (or per minute) over a window, epoch can help. Here’s a simple example for average events per hour in a given date range:

with bounds as (
  select
    timestamptz '2026-02-01 00:00:00-05' as start_ts,
    timestamptz '2026-02-04 00:00:00-05' as end_ts
), stats as (
  select
    count(*) as event_count,
    date_part('epoch', (b.end_ts - b.start_ts)) as seconds
  from app_event e
  cross join bounds b
  where e.occurred_at >= b.start_ts
    and e.occurred_at < b.end_ts
  -- group by the constant bounds so the window length can sit
  -- alongside the aggregate
  group by b.start_ts, b.end_ts
)
select
  event_count,
  seconds,
  (event_count / (seconds / 3600.0)) as events_per_hour
from stats;

Notice the 3600.0: I force floating point math intentionally.

Time Zones and DST: Where “Correct” Gets Personal

If you only remember one thing: date_part() respects the type you pass.

  • timestamp without time zone has no timezone context. PostgreSQL will not “convert” it.
  • timestamptz (timestamp with time zone) represents an absolute instant. Display depends on the session timezone.

In teams, the biggest mistakes come from extracting “hour” and assuming everyone sees the same hour.

Make your timezone choice explicit

If your business logic is “hour in America/New_York,” be explicit:

select
  occurred_at,
  date_part('hour', occurred_at at time zone 'America/New_York')::int as hour_ny
from app_event
order by occurred_at;

Why AT TIME ZONE here? Because it converts a timestamptz to a local timestamp (no timezone) in the requested zone, and then you extract parts from that local wall-clock time.

DST edge case: missing and repeated hours

During spring forward, a local hour can disappear; during fall back, an hour repeats. If you’re grouping by local hour during those transitions, counts can look “wrong” but the data is correct.

In my experience, the safest approach is:

  • Store timestamptz in UTC (or at least as timestamptz).
  • When extracting business-time parts, convert to the business timezone explicitly in the query.
  • For billing and compliance reports, pin the timezone in a view so you don’t rely on the session setting.
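To see the spring-forward gap concretely, here is a small Python/zoneinfo check (2026-03-08 is the US second-Sunday-of-March transition for America/New_York):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# On 2026-03-08, 02:00 local jumps to 03:00, so local hour 2 never occurs.
est = datetime(2026, 3, 8, 1, 30, tzinfo=ny)  # still EST
edt = datetime(2026, 3, 8, 3, 30, tzinfo=ny)  # already EDT
print(est.utcoffset().total_seconds() / 3600)  # -5.0
print(edt.utcoffset().total_seconds() / 3600)  # -4.0
```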

Debugging tip: show both UTC and local parts

When someone says “the data is off by one hour,” I run something like:

select
  occurred_at,
  date_part('hour', occurred_at)::int as hour_session_tz,
  date_part('hour', occurred_at at time zone 'UTC')::int as hour_utc,
  date_part('hour', occurred_at at time zone 'America/Los_Angeles')::int as hour_la
from app_event
order by occurred_at;

This usually reveals whether the issue is a session timezone setting, a missing AT TIME ZONE, or an upstream bug storing local timestamps as if they were UTC.

date_part() with Intervals: Durations, SLAs, and “Time in State” Analytics

Intervals are where date_part() starts to feel like a power tool. Instead of extracting from an absolute date/time, you extract from a duration.

Example: measuring time from signup to purchase

Let’s extend the sample data a bit:

drop table if exists user_lifecycle;

create table user_lifecycle (
  user_id bigint primary key,
  signup_at timestamptz not null,
  purchase_at timestamptz
);

insert into user_lifecycle (user_id, signup_at, purchase_at) values
  (101, '2026-01-31 23:50:00-05', null),
  (202, '2026-02-01 08:00:00-05', '2026-02-02 12:15:30-05'),
  (303, '2026-02-01 09:00:00-05', '2026-02-03 09:10:11-05');

Now compute an interval and extract useful parts:

select
  user_id,
  purchase_at - signup_at as time_to_purchase,
  date_part('day', purchase_at - signup_at)::int as days,
  date_part('hour', purchase_at - signup_at)::int as hours,
  date_part('minute', purchase_at - signup_at)::int as minutes
from user_lifecycle
where purchase_at is not null
order by user_id;

A warning: the day, hour, minute extracted from an interval are not “total hours” across the whole duration. They’re the field components of the interval.

If you want total seconds/minutes/hours, epoch is the safer choice:

select
  user_id,
  date_part('epoch', purchase_at - signup_at) as total_seconds,
  (date_part('epoch', purchase_at - signup_at) / 3600.0) as total_hours
from user_lifecycle
where purchase_at is not null
order by user_id;

That’s the difference between:

  • “This duration is 1 day and 3 hours” (component fields)
  • “This duration is 27 total hours” (epoch-derived total)

In SLA work, I almost always want totals, not components.
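Python's timedelta has the same component-vs-total split, which makes it a convenient way to internalize the distinction:

```python
from datetime import timedelta

td = timedelta(days=1, hours=3)  # "1 day 3 hours"

# Component view, like date_part('day', interval) and
# date_part('hour', interval):
print(td.days)             # 1
print(td.seconds // 3600)  # 3 (hours component of the sub-day part)

# Total view, like date_part('epoch', interval) / 3600.0:
print(td.total_seconds() / 3600)  # 27.0
```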

Pattern: time-in-state analytics (support tickets, shipments, workflows)

A very common shape is an “entered state at” and “left state at” table. Here’s a minimal example:

drop table if exists ticket_state;

create table ticket_state (
  ticket_id bigint not null,
  state text not null,
  entered_at timestamptz not null,
  left_at timestamptz,
  primary key (ticket_id, state, entered_at)
);

insert into ticket_state (ticket_id, state, entered_at, left_at) values
  (1, 'open', '2026-02-01 09:00:00+00', '2026-02-01 10:30:00+00'),
  (1, 'pending', '2026-02-01 10:30:00+00', '2026-02-02 08:00:00+00'),
  (1, 'resolved', '2026-02-02 08:00:00+00', null),
  (2, 'open', '2026-02-01 12:00:00+00', '2026-02-01 12:20:00+00');

Now compute per-state durations and summarize. I like epoch here because it gives totals cleanly.

select
  state,
  count(*) as rows,
  avg(date_part('epoch', coalesce(left_at, now()) - entered_at)) / 60.0 as avg_minutes
from ticket_state
group by 1
order by 1;

Two real-world notes:

1) left_at being NULL is common for the current state. I use coalesce(left_at, now()) for “so far” metrics.

2) now() is stable within a statement, which is usually what you want for consistency.

A Quick Field Reference I Actually Memorize

There are many supported fields, but I find it most useful to memorize a core set and look up the rest only when needed. Here’s the set I personally reach for most often.

Calendar/date fields

  • 'year', 'month', 'day'
  • 'quarter'
  • 'doy' (day of year)
  • 'dow', 'isodow'
  • 'week' (ISO week number)
  • 'isoyear'

Time-of-day fields

  • 'hour', 'minute', 'second'

Duration and conversion fields

  • 'epoch'

Debugging fields

  • 'timezone', 'timezone_hour', 'timezone_minute' (for timestamptz)

If you’re building a reporting layer, my practical advice is: standardize on a small subset and document it. Most “time bugs” are really “semantic disagreements.”

When I Use date_part() (and When I Avoid It)

date_part() is a great extraction function, but it’s easy to overuse.

I use date_part() when…

  • I need a numeric dimension to group/filter on (hour, isodow, month, doy).
  • I’m doing “shape of activity” analysis: peak hours, weekday patterns, seasonal curves.
  • I need total seconds from an interval using epoch.

I avoid date_part() when…

  • I really want a bucket boundary timestamp (I use date_trunc() instead).
  • I need index-friendly filtering over a time range (I prefer range predicates on the raw timestamp).
  • I’m generating labels for display (I use to_char()).

Here’s what I mean by “index-friendly.” Compare these two filters:

-- Often harder to optimize well for large tables
where date_part('year', occurred_at) = 2026

-- Usually better: keep the column on the left and filter by a range
where occurred_at >= timestamptz '2026-01-01 00:00:00+00'
  and occurred_at < timestamptz '2027-01-01 00:00:00+00'

In production, that second pattern typically scales better because it can use a plain index on occurred_at directly.

Performance: Keeping date_part() Fast on Big Tables

This is the part people skip until a dashboard takes 40 seconds.

Principle 1: prefer range filters over extraction filters

If your goal is “only rows in February,” filtering with date_part('month', ...) = 2 reads nicely, but it forces PostgreSQL to compute a value for every candidate row (and it’s not selective enough in many datasets).

I prefer:

where occurred_at >= timestamptz '2026-02-01 00:00:00+00'
  and occurred_at < timestamptz '2026-03-01 00:00:00+00'

Then, inside the query, I’ll still use date_part() for grouping if I need “hour of day” or “isodow.”

Principle 2: if you must filter by a part, consider derived columns

Sometimes you truly need a predicate like “hour between 9 and 17” and you need it to be fast at scale.

A common production approach is to store a derived value you query frequently. For example:

alter table app_event
  add column occurred_hour smallint;

update app_event
set occurred_hour = date_part('hour', occurred_at at time zone 'America/New_York')::int;

Then you can index it:

create index on app_event (occurred_hour);

In real systems, I’d populate it on write (via application code or a trigger), or generate it during ingestion.

Why not always use an expression index on date_part(...)? Sometimes you can, but there are two practical complications:

  • If the expression depends on session settings (like timezone), it may not be eligible for indexing as-is.
  • Even when you can index the expression, you still need to be careful that every query uses the exact same expression, or the index won’t match.

Derived columns reduce “query discipline” requirements.

Principle 3: use date_trunc() for buckets you join on

For rollups, I like date_trunc() because it creates a canonical boundary that behaves nicely.

Example: hourly buckets (then date_part() is optional for labels).

select

datetrunc(‘hour‘, occurredat) as hour_bucket,

count(*)

from app_event

group by 1

order by 1;

If you need the hour number too:

select
  hour_bucket,
  date_part('hour', hour_bucket)::int as hour_of_day,
  count(*)
from (
  select date_trunc('hour', occurred_at) as hour_bucket
  from app_event
) x
group by 1, 2
order by 1;

Principle 4: for time-series at scale, consider BRIN and partitioning

If your table is append-heavy and ordered by time (common for events), a BRIN index on occurred_at can be very effective for range scans.

Partitioning by month (or day) can also turn “scan everything” into “scan a few partitions.” If you go that route, date_part() still matters, but your first line of defense is keeping predicates partition-friendly (ranges on the timestamp).

A practical “before/after” mental model

I don’t like promising exact performance numbers because it depends on data size, distribution, and hardware. But I do see a consistent pattern:

  • Extraction in WHERE tends to scale poorly as row count grows.
  • Range predicates + grouping extraction tends to scale much better.
  • If you frequently filter on a time part, store it or index it carefully.

Common Pitfalls (and How I Avoid Them)

These are the mistakes I see over and over.

Pitfall 1: forgetting that the return type is floating point

If you group by date_part('hour', ...) without casting, you’ll still get clean integers most of the time, but you’re carrying a float when you don’t need to.

My habit is:

date_part('hour', ts)::int

Or, when I want to be explicit about “bucket integer”:

floor(date_part('hour', ts))::int

Pitfall 2: mixing calendar year with ISO week

If you report weekly metrics, decide early whether you mean ISO weeks or “weeks starting on Sunday” or “weeks starting on Monday in a business timezone.”

If you use date_part('week', ts) you’re getting ISO week numbers. If you group by ISO week number, also group by isoyear, not calendar year, or you’ll merge different years’ week 1 together.

select
  date_part('isoyear', occurred_at)::int as iso_year,
  date_part('week', occurred_at)::int as iso_week,
  count(*)
from app_event
group by 1, 2
order by 1, 2;

Pitfall 3: extracting from timestamptz without pinning timezone

If you want “hour in New York,” always convert first.

date_part('hour', occurred_at at time zone 'America/New_York')

I also recommend explicitly setting the session timezone in ETL jobs and scheduled reports so results don’t drift if environment defaults change.

Pitfall 4: treating interval components as totals

This one is subtle and important. For an interval of, say, 1 day 3 hours, date_part('hour', interval) gives 3, not 27.

Totals: use epoch.

(date_part('epoch', interval_val) / 3600.0)

Pitfall 5: using date_part() for formatting

If you need “2026-02” as a label, to_char() is designed for that.

select to_char(occurred_at at time zone 'UTC', 'YYYY-MM') as ym
from app_event;

You can still keep numeric fields via date_part() for sorting and grouping.

Practical Scenarios That Come Up a Lot

Here are a few “copy-paste and adapt” examples I’ve used in real systems.

Scenario 1: peak purchase hour by weekday

This is a classic: find peak hour distribution and split by weekday.

select
  date_part('isodow', occurred_at at time zone 'America/New_York')::int as isodow,
  date_part('hour', occurred_at at time zone 'America/New_York')::int as hour,
  count(*) as purchases
from app_event
where event_name = 'purchase'
group by 1, 2
order by 1, 2;

If you want “peak hour per weekday,” I’ll rank within weekday:

with counts as (
  select
    date_part('isodow', occurred_at at time zone 'America/New_York')::int as isodow,
    date_part('hour', occurred_at at time zone 'America/New_York')::int as hour,
    count(*) as purchases
  from app_event
  where event_name = 'purchase'
  group by 1, 2
)
select isodow, hour, purchases
from (
  select
    *,
    row_number() over (partition by isodow order by purchases desc, hour asc) as rn
  from counts
) x
where rn = 1
order by isodow;

Scenario 2: “business days only” checks

If you have a table that should only contain weekday activity (or should exclude weekends), isodow is a simple check.

select *
from app_event
where date_part('isodow', occurred_at at time zone 'America/New_York')::int in (6, 7)
order by occurred_at;

If this returns rows and it shouldn’t, you’ve got a data quality issue (or a misunderstanding of what the timestamp represents).

Scenario 3: generating complete time buckets (including zeros)

Dashboards often need “show 0 counts for missing hours.” I use generate_series() to build the bucket spine, then left join.

with buckets as (
  select generate_series(
    timestamptz '2026-02-01 00:00:00+00',
    timestamptz '2026-02-02 00:00:00+00',
    interval '1 hour'
  ) as hour_bucket
), counts as (
  select
    date_trunc('hour', occurred_at) as hour_bucket,
    count(*) as events
  from app_event
  group by 1
)
select
  b.hour_bucket,
  coalesce(c.events, 0) as events
from buckets b
left join counts c using (hour_bucket)
order by 1;

Notice I used date_trunc() for the join key. date_part() is great for dimensions, but join keys want canonical timestamps.

Scenario 4: “rolling 7-day” by day-of-week (seasonality)

I’ll combine range filters with isodow grouping.

select
  date_part('isodow', occurred_at at time zone 'America/New_York')::int as isodow,
  count(*) as events
from app_event
where occurred_at >= (now() - interval '7 days')
group by 1
order by 1;

This gives a quick “what days are hot lately” picture.

A Small Checklist I Use Before Shipping a Time-Based Query

If I’m about to ship a query that drives a dashboard, I quickly sanity-check these:

1) Am I using the correct timestamp type (timestamptz vs timestamp)?

2) Is the timezone explicit where it needs to be?

3) If I use week reporting, am I consistent about ISO semantics (isoyear + week)?

4) Am I filtering by ranges (index-friendly) rather than extracting in WHERE?

5) Did I cast date_part() results to integers where appropriate?

6) For intervals, am I using component fields intentionally, or should I use epoch totals?

That list prevents 90% of the “why is this chart weird?” threads.

Closing Thoughts

I treat date_part() as a precision tool: it takes a rich timestamp or interval and turns it into a single numeric feature you can group and reason about. The trick isn’t memorizing every possible field—it’s knowing the semantic traps (especially dow vs isodow, ISO year/week edges, and timezone behavior) and shaping your queries so they stay correct and fast.

If you adopt one production habit from this: filter by time ranges on the raw timestamp, and use date_part() primarily for the grouping dimensions and derived metrics. That single choice usually buys you both correctness and performance.
