
[DTensor] expand_to_full_mesh_op_strategy filters mixed partials #173614

Closed
wconstab wants to merge 6 commits into gh/wconstab/510/base from gh/wconstab/510/head

Conversation

Contributor

@wconstab wconstab commented Jan 28, 2026

Stack from ghstack (oldest at bottom):

Per #172609, we do not allow mixed partial placements in one DTensor spec.
It is therefore pointless to generate strategies that encourage mixed
partial types.

Before this PR, any op strategy that had rules for more than one partial
type and used the expand util could combine them inappropriately. After
this PR, the expansion util filters out any such combinations.
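
For illustration, a minimal self-contained sketch of the filtering idea (the stand-in Partial/Replicate dataclasses and the filter_mixed_partials helper are hypothetical, not the actual expand_to_full_mesh_op_strategy code):

from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Partial:  # stand-in for torch.distributed.tensor.Partial
    reduce_op: str = "sum"

@dataclass(frozen=True)
class Replicate:  # stand-in for torch.distributed.tensor.Replicate
    pass

def filter_mixed_partials(candidates):
    # Keep only placement tuples whose Partial entries all share one
    # partial kind (type + reduce_op), mirroring the rule from #172609
    # that a single DTensor spec may not mix partial types.
    for placements in candidates:
        kinds = {
            (type(p), p.reduce_op) for p in placements if isinstance(p, Partial)
        }
        if len(kinds) <= 1:
            yield placements

# On a 2-D mesh, a naive cross product of per-dim options can pair
# Partial("sum") with Partial("max"); the filter drops such combinations.
per_dim = [[Partial("sum"), Partial("max"), Replicate()]] * 2
kept = list(filter_mixed_partials(product(*per_dim)))
assert (Partial("sum"), Partial("max")) not in kept
assert (Partial("sum"), Partial("sum")) in kept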


pytorch-bot Bot commented Jan 28, 2026

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.


pytorch-bot Bot commented Jan 28, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/173614

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 Cancelled Jobs, 4 Unrelated Failures

As of commit 21537d4 with merge base 4c6817d:

CANCELLED JOBS - The following jobs were cancelled. Please retry:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot Bot added the release notes: distributed (dtensor) release notes category label Jan 28, 2026
wconstab added a commit that referenced this pull request Jan 28, 2026
ghstack-source-id: eb4a927
Pull Request resolved: #173614
…rtials"

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
wconstab added a commit that referenced this pull request Jan 29, 2026
ghstack-source-id: 8cca0a6
Pull Request resolved: #173614
@wconstab wconstab requested review from pianpwk and zpcore January 29, 2026 17:12
…rtials"

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
wconstab added a commit that referenced this pull request Feb 10, 2026
ghstack-source-id: 2984751
Pull Request resolved: #173614
Comment thread on torch/distributed/tensor/_ops/utils.py (outdated):
for spec in spec_list:
    if spec is not None:
        partial_reduce_ops = {
            p.reduce_op for p in spec.placements if isinstance(p, Partial)
        }
        # (presumed continuation: the strategy is skipped when the spec
        # mixes more than one partial reduce op)
Contributor

maybe a weird case, but I would be wary of NormPartial() and P(sum) having the same reduce_op attribute

Contributor Author

well i thought we were deleting NormPartial soon? but if not i should harden this.

Contributor Author

@wconstab wconstab Feb 18, 2026

@pianpwk i've updated to cover this.
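
For context, a sketch of the hardening concern (stand-in dataclasses; NormPartial here is a hypothetical model of the internal norm-partial placement, which per the comment above can report the same reduce_op "sum" as a plain Partial):

from dataclasses import dataclass

@dataclass(frozen=True)
class Partial:  # stand-in for torch.distributed.tensor.Partial
    reduce_op: str = "sum"

@dataclass(frozen=True)
class NormPartial(Partial):  # hypothetical stand-in for the norm-partial type
    norm_type: float = 2.0

def partial_kinds(placements):
    # Key on (type, reduce_op) rather than reduce_op alone, so a
    # norm-partial with reduce_op "sum" stays distinct from Partial("sum").
    return {(type(p), p.reduce_op) for p in placements if isinstance(p, Partial)}

assert len(partial_kinds([Partial("sum"), NormPartial()])) == 2  # mixed: filter
assert len(partial_kinds([Partial("sum"), Partial("sum")])) == 1  # uniform: keep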

…rtials"

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
wconstab added a commit that referenced this pull request Feb 10, 2026
ghstack-source-id: 2483827
Pull Request resolved: #173614
…rtials"

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
wconstab added a commit that referenced this pull request Feb 10, 2026
ghstack-source-id: e4407b4
Pull Request resolved: #173614
…rtials"

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
wconstab added a commit that referenced this pull request Feb 18, 2026
ghstack-source-id: 38e8412
Pull Request resolved: #173614
@wconstab
Contributor Author

@pytorchbot merge -i

@pytorch-bot pytorch-bot Bot added the ciflow/trunk Trigger trunk jobs on your pull request label Feb 18, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 4 checks: inductor / inductor-cpu-build / build, inductor / unit-test / inductor-halide-build / build, inductor / unit-test / inductor-cpu-core-build (3.12) / build, inductor / inductor-test-cuda13 / test (inductor_torchbench, 2, 2, linux.g5.4xlarge.nvidia.gpu)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

norx1991 pushed a commit that referenced this pull request Feb 24, 2026
Pull Request resolved: #173614
Approved by: https://github.com/pianpwk, https://github.com/zpcore
@github-actions github-actions Bot deleted the gh/wconstab/510/head branch March 21, 2026 02:24
EmanueleCoradin pushed a commit to EmanueleCoradin/pytorch that referenced this pull request Mar 30, 2026
Pull Request resolved: pytorch#173614
Approved by: https://github.com/pianpwk, https://github.com/zpcore

Labels

ciflow/inductor, ciflow/trunk (Trigger trunk jobs on your pull request), Merged, release notes: distributed (dtensor) (release notes category)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants