sp_QuickieStore: @find_high_impact performance overhaul (60s → 10s) #718

Merged: erikdarlingdata merged 18 commits into main from dev on Mar 24, 2026

Conversation

@erikdarlingdata
Owner

Summary

Comprehensive performance overhaul of @find_high_impact, reducing total execution time from ~60s to ~10s on large Query Stores.

Query decomposition

  • Split 4-way DMV join into staged steps: interval IDs → runtime stats (EXISTS) → plan rollup → query hash rollup
  • Staged query_ids are reused by the representative text, time bucketing, and identifier steps — each touches only the DMVs it needs
  • Static SQL INSERT for scoring/diagnostics, separate OUTER APPLY for plans
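The staged flow above can be sketched roughly as follows. The #hi_* temp table names come from this PR, but the column lists, date predicates, and CREATE/SELECT INTO shapes are illustrative, not the procedure's actual code:

```sql
/* Illustrative staging sketch; the real procedure's columns differ. */
CREATE TABLE #hi_intervals
(
    runtime_stats_interval_id bigint NOT NULL PRIMARY KEY CLUSTERED
);

/* Step 1a: stage matching interval IDs (tiny result set from one DMV). */
INSERT #hi_intervals (runtime_stats_interval_id)
SELECT qsrsi.runtime_stats_interval_id
FROM sys.query_store_runtime_stats_interval AS qsrsi
WHERE qsrsi.start_time >= @start_date
AND   qsrsi.end_time <= @end_date;

/* Step 1b: aggregate runtime stats to plan_id, filtering with EXISTS
   against the staged intervals instead of joining the two DMVs. */
SELECT
    qsrs.plan_id,
    total_executions = SUM(qsrs.count_executions),
    total_duration = SUM(qsrs.avg_duration * qsrs.count_executions)
INTO #hi_plan_stats
FROM sys.query_store_runtime_stats AS qsrs
WHERE EXISTS
(
    SELECT 1/0
    FROM #hi_intervals AS hi
    WHERE hi.runtime_stats_interval_id = qsrs.runtime_stats_interval_id
)
GROUP BY qsrs.plan_id;

/* Step 1c: roll plan-level stats up to query_hash, driving from the
   small pre-aggregated temp table. */
SELECT
    qsq.query_hash,
    total_executions = SUM(hps.total_executions),
    total_duration = SUM(hps.total_duration)
INTO #hi_query_stats
FROM #hi_plan_stats AS hps
JOIN sys.query_store_plan AS qsp
  ON qsp.plan_id = hps.plan_id
JOIN sys.query_store_query AS qsq
  ON qsq.query_id = qsp.query_id
GROUP BY qsq.query_hash;
```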

Join/filter improvements

  • EXISTS semi-join pattern for interval filtering and plan fetch
  • Plan OUTER APPLY uses #hi_id_staging_plans with PK seek on query_store_plan
  • Removed redundant maintenance NOT EXISTS subqueries (~12s saved)
  • Removed query_plan IS NOT NULL filter from the final OUTER APPLY (~5s saved)
  • ROW_NUMBER pattern replaces TOP (1) in the correlated plan subquery
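A sketch of the combined EXISTS semi-join and ROW_NUMBER pattern in the plan fetch. It assumes #hi_id_staging_plans carries (plan_id, query_hash) and uses an illustrative ORDER BY key; neither is confirmed by the PR text:

```sql
SELECT
    ho.query_hash,
    p.query_plan
FROM #hi_output AS ho
OUTER APPLY
(
    SELECT x.query_plan
    FROM
    (
        SELECT
            qsp.query_plan,
            n = ROW_NUMBER() OVER (ORDER BY qsp.last_execution_time DESC)
        FROM sys.query_store_plan AS qsp
        WHERE EXISTS
        (
            /* Semi-join against the staged plan_ids allows a
               PK seek on query_store_plan. */
            SELECT 1/0
            FROM #hi_id_staging_plans AS sp
            WHERE sp.plan_id = qsp.plan_id
            AND   sp.query_hash = ho.query_hash
        )
    ) AS x
    WHERE x.n = 1
) AS p;
```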

Optimizer hints

  • Clustered PKs on 7 temp tables (plan_id, query_hash, interval IDs)
  • Hash join hint on interval EXISTS
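A minimal sketch of the temp table PK pattern; query_hash is binary(8) in Query Store, and the other column names are illustrative:

```sql
/* Declaring the clustered PK in the table definition gives the
   optimizer a guaranteed unique, ordered key before any rows arrive. */
CREATE TABLE #hi_query_stats
(
    query_hash binary(8) NOT NULL PRIMARY KEY CLUSTERED,
    total_executions bigint NOT NULL,
    total_duration bigint NOT NULL
);
```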

Output fixes

  • Step ordering fix (staging after #hi_interesting, not before)
  • @sort_order tiebreaker for impact_score ties
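Conceptually, the tiebreaker makes the final ordering deterministic when impact scores tie. A sketch assuming @sort_order resolves to a duration column (the real procedure builds this dynamically):

```sql
SELECT TOP (@top)
    ho.*
FROM #hi_output AS ho
ORDER BY
    ho.impact_score DESC,
    ho.total_duration DESC; /* tiebreaker column chosen by @sort_order */
```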

PRs included

#709, #710, #711, #712, #713, #714, #715, #716, #717

Test plan

  • SQL2022 — StackOverflow2013, hammerdb_tpch, hammerdb_tpcc
  • SQL2016 — StackOverflow2010

🤖 Generated with Claude Code

erikdarlingdata and others added 18 commits March 24, 2026 09:33
Representative text (ranked_hash, ~23s):
- Moved query_id staging to run immediately after #hi_interesting
- Representative text query now starts from #hi_id_staging_queries
  (pre-filtered interesting hashes) instead of scanning all runtime
  stats rows (1.7M → ~10K)

Final SELECT plan fetch (~29s):
- Replaced TOP (1) ... ORDER BY with ROW_NUMBER() ... WHERE n = 1
  in the OUTER APPLY to give the optimizer better options for the
  correlated subquery

Tested on SQL2022 (SO2013, TPC-H) and SQL2016 (SO2010).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

sp_QuickieStore: optimize representative text and plan fetch in @find_high_impact

Steps 2b/2c (query_id staging + representative text) were placed before
Step 2 (top N → #hi_interesting), so #hi_id_staging_queries joined an
empty #hi_interesting table. This caused NULL primary_window and
query_id_list values and empty query_sql_text.

Moved them to run after Step 3 (scoring), renumbered as Steps 3b/3c,
where #hi_interesting is populated.

Tested on SQL2022 and SQL2016 — all columns populated correctly.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sp_QuickieStore: fix @find_high_impact step ordering
- Remove maintenance NOT EXISTS subquery from Step 1 aggregate and
  Step 3c representative text (~12s savings). These hashes are already
  filtered out by the main procedure's maintenance exclusion before
  reaching @find_high_impact.
- Remove query_plan IS NOT NULL from the final OUTER APPLY (~5s savings).
  TRY_CONVERT handles NULL plans gracefully in the output.

Tested on SQL2022.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

sp_QuickieStore: remove redundant filters from @find_high_impact
The OUTER APPLY correlated on query_hash through query_store_query
for each output row. It now uses #hi_id_staging_queries (already
populated) to skip query_store_query entirely: one DMV instead of two,
correlating against a known-small temp table.

Tested on SQL2022.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

sp_QuickieStore: use staged query_ids in final plan OUTER APPLY
The OUTER APPLY was joining #hi_id_staging_queries → query_store_plan
by query_id. Now uses #hi_id_staging_plans which has the plan_ids
directly, allowing a PK seek on query_store_plan instead of a
query_id join.

Tested on SQL2022.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sp_QuickieStore: use plan_id PK seek in final OUTER APPLY
The JOIN pattern inside the OUTER APPLY wasn't getting a good plan.
Rewriting it as an EXISTS semi-join against #hi_id_staging_plans lets
the optimizer pick a better strategy for the correlated subquery.

Tested on SQL2022.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sp_QuickieStore: use EXISTS in final plan OUTER APPLY
The initial aggregate query joined all 4 Query Store DMVs together
(qsq → qsp → qsrs → qsrsi), causing the optimizer to pick bad plans
with TVFs taking 8+ seconds.

Split into two stages following the pattern used elsewhere in the proc:
- Step 1a: qsrs + qsrsi → #hi_plan_stats (aggregate to plan_id level)
- Step 1b: #hi_plan_stats + qsp + qsq → #hi_query_stats (roll up to
  query_hash, driving from a small pre-aggregated temp table)

Tested on SQL2022 (StackOverflow2013) and SQL2016 (StackOverflow2010).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

sp_QuickieStore: split Step 1 aggregate to avoid 4-way DMV join
Step 1a now stages matching runtime_stats_interval IDs into
#hi_intervals (tiny result set from 1 DMV), then Step 1b aggregates
runtime_stats with EXISTS against the temp table instead of joining
the two DMVs directly.

Tested on SQL2022 and SQL2016.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sp_QuickieStore: stage interval IDs for @find_high_impact runtime stats
Adds clustered primary keys to 7 temp tables to give the optimizer
better cardinality and join strategy information:
- #hi_plan_stats (plan_id)
- #hi_query_stats (query_hash)
- #hi_interesting (query_hash)
- #hi_primary_window (query_hash)
- #hi_query_waits (query_hash)
- #hi_query_identifiers (query_hash)
- #hi_output (query_hash)

Tables without PKs are those with composite keys or multiple rows per
hash (staging tables, time_buckets, wait_staging, representative_text).

Tested on SQL2022 and SQL2016.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
sp_QuickieStore: add clustered PKs to @find_high_impact temp tables
erikdarlingdata merged commit 8bb1e20 into main on Mar 24, 2026
9 checks passed
