[inductor] Make addcmul/addcdiv decomp skip unconditional and add another decomp#175839
Closed
mlazos wants to merge 16 commits into gh/mlazos/110/base
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/175839
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (2 Unrelated Failures)
As of commit c721804 with merge base a6beff3:
BROKEN TRUNK - The following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This was referenced Feb 26, 2026
mlazos added a commit that referenced this pull request on Feb 26, 2026:
The FMA lowerings for addcmul/addcdiv are now unconditional (not gated by emulate_precision_casts), but the decomposition skip in select_decomp_table() was still gated by that config. This meant the decompositions would override the FMA lowerings when emulate_precision_casts=False.

Make the decomp skip unconditional to match the lowerings. Also add aten.addcmul_ (in-place) to the skip list.

Authored with Claude.

ghstack-source-id: facc355
Pull-Request: #175839
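For illustration, a minimal sketch of what the select_decomp_table() change described above could look like. This is not the PR's code: core_decomps is a hypothetical stand-in for inductor's decomposition table, and the real implementation in torch/_inductor/decomposition.py keys on specific op overloads and may differ in detail.

```python
import torch

aten = torch.ops.aten
core_decomps = {}  # hypothetical stand-in for inductor's decomposition table


def select_decomp_table():
    decomps = dict(core_decomps)
    # Before this PR, this skip ran only when config.emulate_precision_casts
    # was set, so with the flag off the decomps overrode the FMA lowerings.
    # Now the skip is unconditional; aten.addcmul_ (in-place) is newly added.
    for op in (aten.addcmul, aten.addcmul_, aten.addcdiv):
        decomps.pop(op, None)
    return decomps
```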
mlazos added a commit that referenced this pull request on Feb 27, 2026:
Same commit message as above. ghstack-source-id: debdcbf. Pull-Request: #175839.
v0i0 approved these changes on Feb 27, 2026.
mlazos added a commit that referenced this pull request on Mar 2, 2026:
Same commit message as above. ghstack-source-id: 34ddc76. Pull-Request: #175839.
Collaborator: Starting merge as part of PR stack under #174911.
pytorchmergebot pushed a commit that referenced this pull request on Mar 3, 2026:
Add CompiledOptimizerBitwiseTests test suite that verifies compiled optimizers produce bitwise identical results to eager when precision configs are enabled:

- eager_numerics.division_rounding = True
- eager_numerics.pow_precision = True
- emulate_precision_casts = True

Tests cover Adam and AdamW with various configurations including amsgrad, maximize, and weight_decay options.

Pull Request resolved: #174911
Approved by: https://github.com/v0i0
ghstack dependencies: #176237, #175839
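As a rough sketch of one case such a bitwise test might exercise (this is not the suite from #174911: emulate_precision_casts is an existing inductor config, while the eager_numerics flags named above come from this stack, so they are shown only as comments):

```python
import torch
import torch._inductor.config as inductor_config

inductor_config.emulate_precision_casts = True
# Per the commit message, this stack also enables (paths assumed from the
# message, not verified against the tree):
#   inductor_config.eager_numerics.division_rounding = True
#   inductor_config.eager_numerics.pow_precision = True


def check_bitwise(opt_cls, **kwargs):
    torch.manual_seed(0)
    p_eager = torch.randn(16, requires_grad=True)
    p_comp = p_eager.detach().clone().requires_grad_(True)

    opt_eager = opt_cls([p_eager], **kwargs)
    opt_comp = opt_cls([p_comp], **kwargs)
    compiled_step = torch.compile(opt_comp.step)

    for _ in range(3):
        grad = torch.randn(16)
        p_eager.grad = grad.clone()
        p_comp.grad = grad.clone()
        opt_eager.step()
        compiled_step()
        # Bitwise identity, not torch.allclose.
        assert torch.equal(p_eager, p_comp)


check_bitwise(torch.optim.Adam, amsgrad=True, maximize=False)
check_bitwise(torch.optim.AdamW, weight_decay=0.01)
```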
postmath pushed a commit to postmath/pytorch that referenced this pull request on Mar 3, 2026:
[inductor] Make addcmul/addcdiv decomp skip unconditional and add another decomp (pytorch#175839)

Same commit message as above. Pull Request resolved: pytorch#175839. Approved by: https://github.com/v0i0. ghstack dependencies: pytorch#176237.
postmath pushed a commit to postmath/pytorch that referenced this pull request on Mar 3, 2026:
Same commit message as the CompiledOptimizerBitwiseTests commit above. Pull Request resolved: pytorch#174911. Approved by: https://github.com/v0i0. ghstack dependencies: pytorch#176237, pytorch#175839.
sandy-gags pushed a commit to sandy-gags/pytorch that referenced this pull request on Mar 12, 2026:
Same commit message as above. ghstack-source-id: 8ac9046. Pull-Request: pytorch/pytorch#175839.
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request on Mar 30, 2026:
[inductor] Make addcmul/addcdiv decomp skip unconditional and add another decomp (pytorch#175839). Same commit message as above. Pull Request resolved: pytorch#175839. Approved by: https://github.com/v0i0. ghstack dependencies: pytorch#176237.
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request on Mar 30, 2026:
Same commit message as the CompiledOptimizerBitwiseTests commit above. Pull Request resolved: pytorch#174911. Approved by: https://github.com/v0i0. ghstack dependencies: pytorch#176237, pytorch#175839.
Stack from ghstack (oldest at bottom):
The FMA lowerings for addcmul/addcdiv are now unconditional (not gated by
emulate_precision_casts), but the decomposition skip in select_decomp_table()
was still gated by that config. This meant the decompositions would override
the FMA lowerings when emulate_precision_casts=False.
Make the decomp skip unconditional to match the lowerings. Also add
aten.addcmul_ (in-place) to the skip list.
Authored with Claude.
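For context: addcmul computes input + value * tensor1 * tensor2. Decomposed into a separate multiply and add, the intermediate product is rounded before the add, whereas an FMA lowering fuses the two with a single rounding, so the two paths are not guaranteed to be bitwise equal in low precision. A minimal illustration (a sketch, independent of this PR's code):

```python
import torch

inp = torch.randn(1024, dtype=torch.bfloat16)
t1 = torch.randn(1024, dtype=torch.bfloat16)
t2 = torch.randn(1024, dtype=torch.bfloat16)

fused = torch.addcmul(inp, t1, t2, value=0.5)
decomposed = inp + 0.5 * (t1 * t2)
# May print False: the two computations round intermediates differently,
# which is why the lowering vs. decomposition choice must be consistent.
print(torch.equal(fused, decomposed))
```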
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @jataylo