Improve regex optimizer through investigation of regex optimizer passes #125289
danmoseley wants to merge 6 commits into dotnet:main
Conversation
After `FinalOptimize`'s `EliminateEndingBacktracking` and `FindAndMakeLoopsAtomic` create new tree structures (`Atomic` wrappers, restructured alternations), walk the tree bottom-up and re-call `Reduce()` on each node. This cleans up patterns like `Concat(X, Empty)` and redundant `Atomic(Oneloopatomic)`, and enables further prefix extraction, improving 221 of 18,931 real-world NuGet patterns (1.2%), all converging in a single extra round. Also moves `UpdateBumpalong` after `ReReduceTree` so it operates on the final tree structure. Fixes dotnet#66031

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Pull request overview
This PR enhances System.Text.RegularExpressions’ post-parse optimization pipeline by adding a final “re-reduce” cleanup pass after FinalOptimize transformations, ensuring any newly introduced structures are simplified by running Reduce() again.
Changes:
- Add a `ReReduceTree()` traversal at the end of `RegexNode.FinalOptimize()` (after `EliminateEndingBacktracking` and before `UpdateBumpalong`).
- Move `UpdateBumpalong` to run after the re-reduction so it sees the final reshaped tree.
- Add focused unit tests covering patterns that only fully optimize with the new re-reduce pass.
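The new pass is essentially a post-order tree walk that re-runs local reduction on each node once its children are final. A toy sketch of that shape (simplified node kinds and reduction rules; this is not the actual `RegexNode.Reduce()` logic, just an illustration of the bottom-up re-reduce idea):

```python
# Toy model of a bottom-up "re-reduce" pass over a regex-like node tree.
# Node kinds and rules are simplified stand-ins for RegexNode.

class Node:
    def __init__(self, kind, children=None):
        self.kind = kind                 # e.g. "Concat", "Atomic", "Empty", "One"
        self.children = children or []

    def reduce(self):
        """Return a simplified equivalent of this single node (local rules only)."""
        # Concat(X, Empty) -> X
        if self.kind == "Concat":
            kept = [c for c in self.children if c.kind != "Empty"]
            if len(kept) == 1:
                return kept[0]
            self.children = kept
        # Atomic(Atomic(X)) -> Atomic(X)
        if self.kind == "Atomic" and self.children and self.children[0].kind == "Atomic":
            return self.children[0]
        return self

def re_reduce(node):
    """Walk bottom-up, re-reducing every node after its children are final."""
    node.children = [re_reduce(c) for c in node.children]
    return node.reduce()

# A shape the global passes can produce: a redundant Atomic wrapper around
# a concat that still carries an Empty child.
tree = Node("Atomic", [Node("Atomic", [Node("Concat", [Node("One"), Node("Empty")])])])
reduced = re_reduce(tree)  # collapses to a single Atomic around the "One" node
```

Because the walk is post-order, a parent is reduced only after its children have already been simplified, which is what lets the cleanup cascade upward in a single traversal.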
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| `src/libraries/System.Text.RegularExpressions/src/System/Text/RegularExpressions/RegexNode.cs` | Adds `ReReduceTree()` and updates `FinalOptimize()` pass ordering so reductions are re-applied after global rewrites. |
| `src/libraries/System.Text.RegularExpressions/tests/UnitTests/RegexReductionTests.cs` | Adds regression cases asserting the re-reduced trees match the expected simplified forms. |
- Row 3: Remove spurious capture group from before-fix equivalent: `(?>([ab]+c[ab]+|[ab]+))` -> `(?>[ab]+c[ab]+|[ab]+)`
- Row 5: Add missing 'a' prefix in before-fix equivalent: `(?>(?>b?)|d)` -> `(?>a(?>b?)|d)`

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
ReplaceChild already handles Reduce + re-parenting, so delegate to it instead of duplicating that logic. Also avoids a double-Reduce that occurred when the manual code passed the reduced node to ReplaceChild. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Better name that pairs with `FinalOptimize`, which calls it. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Thanks. Could you explore variations, in particular for overall parse time, like not doing any reduction (other than maybe removing Group) until the end, and then reducing until it stabilizes?
I explored several variations of deferring reduction to the end. Here's a summary of the experiments and findings.

Setup: 15,817 unique regex patterns from a real-world JSON corpus, Release build.

Experiment 1: Baselines

Measured the cost of …

Experiment 2: Phase profiling (this PR)

Instrumented each phase to understand where time is spent: …

Experiment 3: Naive deferral
| Variant | Per-pattern (μs) |
|---|---|
| This PR (reduce during parse + FinalReduce) | 4.08 +/- 0.07 |
| Deferred (ReduceMinimal during parse + 2x FinalReduce) | 4.29 +/- 0.07 |
| Difference | +0.21 μs (5%, t=8.2, p << 0.01) |
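A difference this small relative to its error bars is worth checking statistically. One common way to do that from the raw per-run timings is Welch's t statistic; the sketch below uses only the standard library, and the sample values are made up for illustration (they are not the PR's measurements, and the PR does not state which test produced its t=8.2):

```python
# Welch's t statistic for two independent timing samples.
# Hypothetical numbers only; illustrates the method, not the PR's data.
import math
from statistics import mean, variance

def welch_t(a, b):
    """t statistic for mean(b) - mean(a) with unequal variances allowed."""
    va, vb = variance(a), variance(b)            # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))    # std error of the difference
    return (mean(b) - mean(a)) / se

baseline = [4.0, 4.1, 4.2]   # hypothetical per-pattern μs, variant 1
deferred = [4.3, 4.4, 4.5]   # hypothetical per-pattern μs, variant 2
t = welch_t(baseline, deferred)   # large |t| => difference unlikely to be noise
```

With real benchmark data you would feed in one timing per iteration and compare |t| against the t-distribution for the combined degrees of freedom.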
Deferred phase breakdown:
| Phase | Per-pattern (μs) | % of total |
|---|---|---|
| Parse + `ReduceMinimal` | 2.3 | 54% |
| Pre-optimization `FinalReduce` | 0.8 | 18% |
| `FindAndMakeLoopsAtomic` | 0.5 | 12% |
| `EliminateEndingBacktracking` | 0.2 | 4% |
| Post-optimization `FinalReduce` | 0.5 | 12% |
| Total | 4.2 | |
Deferring saves ~0.4 μs/pattern during parse but adds 0.8 μs for the pre-optimization FinalReduce tree walk.
Conclusions
The deferred approach is both more complex and slower:

- More code: needs a new `AddChildMinimal` method, changes to 13 parser call sites, `FinalReduce` moved outside the RTL/NonBacktracking guard, and two full tree-walk reduction passes instead of one.
- Slower: 4.29 vs 4.08 μs/pattern (5% regression, statistically significant). The parse-time savings from skipping full `Reduce` are more than offset by the additional tree walk.
- Fundamental constraint: reduction methods (`ReduceAlternation`, etc.) create new nodes via `AddChild` and depend on full reduction happening there. You can't simply remove `Reduce` from `AddChild`; you need a two-track system (minimal for the parser, full for the reducers).
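The "reduce until it stabilizes" variant explored here amounts to a fixed-point loop. A toy sketch, using made-up string-rewrite rules as stand-ins for the real tree-based `Reduce()` (the rules and patterns below are illustrative only; the actual result in the corpus was that every pattern stabilizes after one extra productive round):

```python
# Toy fixed-point reduction: apply one rewrite round repeatedly until the
# result stops changing, counting rounds (the final round confirms stability).

def reduce_once(pattern: str) -> str:
    """One round of toy local simplifications (stand-ins for Reduce())."""
    # Drop empty non-capturing groups; unwrap single-character classes.
    for old, new in (("(?:)", ""), ("[b]", "b"), ("[c]", "c")):
        pattern = pattern.replace(old, new)
    return pattern

def reduce_to_fixed_point(pattern: str):
    """Return (stable_pattern, rounds), where the last round finds no change."""
    rounds = 0
    while True:
        rounds += 1
        nxt = reduce_once(pattern)
        if nxt == pattern:
            return pattern, rounds
        pattern = nxt

stable, rounds = reduce_to_fixed_point("(?>a?(?>[b]))")  # -> "(?>a?(?>b))" in 2 rounds
```

The loop cost is one extra full walk per productive round plus a final confirming walk, which is why "converges in exactly 2 rounds" matters: it bounds the deferred variant at two full-tree passes.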
The current PR approach (reduce during parse plus one `FinalReduce` at the end) appears to be the simplest correct approach. The integrated parse-time reduction is essentially free (no extra tree walk), and only one post-optimization `FinalReduce` pass is needed.
Doing some more investigations
Follow-up: Deep analysis of FinalReduce impact and cost distribution

After the initial experiments, I ran three deeper investigations to understand `FinalReduce`'s real-world behavior and whether the overhead can be reduced.

Investigation 1: How many patterns actually benefit from FinalReduce?

Tested all 15,810 parseable patterns from the corpus. `FinalReduce` produces a different tree for only 88 patterns (0.6%). Those 88 patterns had 104 total nodes changed (avg 1.2 per pattern). The remaining 99.4% get a full tree walk that finds nothing to change.

Investigation 2: Can we skip FinalReduce when it's not needed?

I prototyped a dirty-flag approach: set a flag when the optimization passes restructure the tree, and skip `FinalReduce` when nothing was flagged. Results:

The early-exit saves ~0.3 μs/pattern, cutting overhead from +16% to +8%. However, it requires maintaining dirty-flag annotations across 6 code locations in two complex methods, and the conditions are subtle (e.g., only flag when the parent is …).

Investigation 3: Per-pattern FinalReduce cost distribution

Measured `FinalReduce` cost individually for each of the 15,810 patterns:
Distribution buckets:

The top 1% of patterns (158) account for 15.3% of total `FinalReduce` cost. The expensive tail is dominated by large alternation patterns: timezone name lists, mobile UA detection strings, SQL keyword lists, and cron expressions. These patterns have deep trees with many nodes to walk.

Conclusions
Fixes #66031
Human/Copilot collaboration to understand whether the regex optimizer passes can be improved, using the real-world patterns corpus as a test bed.
Investigation
We analyzed the issue by running experiments against 18,931 real-world regex patterns extracted from NuGet packages (dotnet/runtime-assets corpus):
- Fixed-point convergence: re-ran `Reduce()` in a loop until the tree stabilized. 221 of 18,931 patterns (1.2%) benefit from a second round. All converge in exactly 2 rounds: zero oscillation, zero regressions.
- Pass ordering sensitivity: tried all permutations of the `FinalOptimize` passes. 0 patterns where ordering matters (beyond the `FinalReduce` placement).
- Minimal fix analysis: compared re-running just `Reduce()` vs. re-running the full `FinalOptimize` passes. Re-reduce alone captures 100% of the improvements. Re-running `FindAndMakeLoopsAtomic` + `EliminateEndingBacktracking` adds nothing.

Problem
The regex optimizer runs in two phases: per-node `Reduce()` calls during parsing, then three global `FinalOptimize()` passes after parsing (`FindAndMakeLoopsAtomic`, `EliminateEndingBacktracking`, `UpdateBumpalong`). The global passes create new tree structures (`Atomic` wrappers, restructured alternations) that are themselves eligible for further reduction, but `Reduce()` never re-runs on them. This leaves optimization opportunities on the table.

Change
Add a single `FinalReduce()` call at the end of `FinalOptimize()`, after `EliminateEndingBacktracking` and before `UpdateBumpalong`. It walks the tree bottom-up and re-calls `Reduce()` on each node, replacing any node that simplifies. This is a 15-line private method with a `StackHelper.TryEnsureSufficientExecutionStack()` guard. `UpdateBumpalong` was also moved to run after `FinalReduce`, since `FinalReduce` can restructure alternations into concatenations with a leading loop that `UpdateBumpalong` needs to see.

Alternatives rejected
Distinct improvement categories in real world corpus
Analysis of all 231 changed pattern trees (221 unique patterns, some with multiple option variants) across 35 distinct structural signatures identified ~4 distinct improvement variants:

| Pattern | Before-fix equivalent | After-fix equivalent |
|---|---|---|
| `a\|ab` | `a(?:)` | `a` |
| `\n\|\n\r\|\r\n` | `(?>\n(?:)\|\r\n)` | `(?>\n\|\r\n)` |
| `[ab]+c[ab]+\|[ab]+` | `(?>[ab]+c[ab]+\|[ab]+)` | `(?>(?>[ab]+)(?:c(?>[ab]+))?)` |
| `ab\|a\|ac` | `a(?>b?)` | `ab?` |
| `ab\|a\|ac\|d` | `(?>a(?>b?)\|d)` | `(?>ab?\|d)` |
| `a?b\|a??b` | `(?>a?(?>[b]))` | `(?>a?(?>b))` (`[b]` simplified to `b`) |
| `[ab]?c\|[ab]??c` | `(?>[ab]?(?>[c]))` | `(?>[ab]?(?>c))` (`[c]` simplified to `c`) |

These are captured in new reduction tests. All 7 tests are verified to fail without the `FinalReduce` change and pass with it.

Performance
`PatternsReduceIdentically` tests continue to pass unchanged.