[DTensor] Add single-dim registration infra#170359
Closed
wconstab wants to merge 8 commits into gh/wconstab/475/base from
Conversation
This was referenced Dec 13, 2025
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/170359
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure. As of commit 7212664 with merge base 1984725, the following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
wconstab commented Dec 13, 2025
weifengpy approved these changes Dec 18, 2025
This PR adds the register_single_dim_strategy util and hooks it up to sharding_propagator. It also tests the registration.

Notes:
* I haven't yet decided how multiple registrations should be handled. I was planning to make it an error to register twice for the same op, for either single-dim or regular strategies.
* I took the cleanest integration path for now in sharding_prop, reusing as much code as possible from the existing 'op_strategy' case. I may have to change this later when integrating find_min_cost.
wconstab (Contributor, Author) commented:
@pytorchbot merge
Collaborator
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
majing921201 pushed a commit to majing921201/pytorch that referenced this pull request Dec 19, 2025
Pull Request resolved: pytorch#170359. Approved by: https://github.com/weifengpy. ghstack dependencies: pytorch#170615, pytorch#167677. Co-authored-by: Pian Pawakapan <pianpwk@meta.com>
pytorchmergebot pushed a commit that referenced this pull request Dec 19, 2025
Enforce that tensor_meta is not None for new single-dim rules. Allow tensor_meta to continue to be None for existing rules for now. We should consider requiring tensor_meta in DTensorSpec in the future, but for now we just try to limit the bleeding. Pull Request resolved: #170827. Approved by: https://github.com/dolpm. ghstack dependencies: #170615, #167677, #170359
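The policy in the follow-up commit above (strict for new single-dim rules, lenient for legacy rules) can be sketched as a single check. The function name and argument names here are hypothetical, for illustration only:

```python
# Illustrative sketch of the described tensor_meta policy: specs produced
# by new single-dim rules must carry tensor_meta, while legacy rules may
# still pass None for now. Names are hypothetical, not the PyTorch API.

def check_tensor_meta(tensor_meta, from_single_dim_rule):
    if from_single_dim_rule and tensor_meta is None:
        raise AssertionError(
            "tensor_meta is required for new single-dim rules"
        )
    # Legacy rules: None is still tolerated, to limit the bleeding
    # without breaking existing registrations.
    return tensor_meta
```

This keeps the stricter invariant opt-in per rule kind until tensor_meta can be made mandatory in DTensorSpec itself.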
xgz2 pushed commits that referenced this pull request Dec 22, 2025, including a revert and a subsequent reland:
This reverts commit 32d0782. Reverted #170359 on behalf of https://github.com/jeanschmidt, required to revert #167677, which is required to revert #170615, which is required to revert #170030.
krastogi-in pushed the same commits to krastogi-in/pytorch Jan 9, 2026.
weifengpy added follow-up commits that referenced this pull request Jan 13-14, 2026.
pytorchmergebot pushed a commit that referenced this pull request Jan 14, 2026
gen_einsum_strategies inserts the replicate strategy first: https://github.com/pytorch/pytorch/blob/74b6a0efa359722def4b585d9d91fbc3a4bfa530/torch/distributed/tensor/_ops/_einsum_strategy.py#L121-L122. _select_min_cost_strategy chooses Replicate at equal cost. This PR ensures consistent matmul results after switching to the single-dim strategy (#170359). Pull Request resolved: #172150. Approved by: https://github.com/wconstab
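The tie-break behavior described in that commit can be seen with a small sketch: Python's min() is stable, returning the first of several equal-cost candidates, so a strategy list that puts Replicate first makes Replicate win ties. The costs and strategy names below are made up for illustration:

```python
# Why strategy ordering matters at equal cost: min() keeps the earliest
# entry when several candidates tie, so inserting the replicate strategy
# first makes it the tie-break winner. Costs here are illustrative only.

candidates = [
    ("Replicate", 4.0),  # inserted first, wins the tie
    ("Shard(0)", 4.0),   # equal cost, listed later, loses the tie
    ("Shard(1)", 7.0),
]


def select_min_cost(strategies):
    # min() is stable: on ties, the earliest list entry is returned.
    return min(strategies, key=lambda s: s[1])


print(select_min_cost(candidates)[0])  # -> Replicate
```

Relying on insertion order for tie-breaking is what keeps matmul results consistent before and after the switch to the single-dim strategy.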
SergeyTyshkevich pushed a commit to SergeyTyshkevich/chart2 that referenced this pull request Jan 19, 2026
ghstack-source-id: abba53d. Pull Request resolved: pytorch/pytorch#170359
Stack from ghstack (oldest at bottom):
This PR adds the register_single_dim_strategy util and hooks it up to sharding_propagator. It also tests the registration.
Notes: