[DTensor] Disallow redistribution to mixed partial types #172609

Closed

wconstab wants to merge 6 commits into gh/wconstab/503/base from gh/wconstab/503/head

Conversation

@wconstab
Contributor

@wconstab wconstab commented Jan 16, 2026

@pytorch-bot

pytorch-bot Bot commented Jan 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/172609

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit c4a0c7c with merge base 4ac0624:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

Comment on lines +836 to +839
partial_reduce_ops: set[str] = set()
for p in placements:
    if p.is_partial():
        partial_reduce_ops.add(p.reduce_op)  # type: ignore[union-attr]
Member
Not sure if this makes the code simpler. We can compare Partial objects directly:
partial_ops = {p for p in placements if p.is_partial()}
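For reference, a minimal sketch of the check written that way, reusing the _assert_no_mixed_partial_types name from this PR and the public torch.distributed.tensor placement types; the signature, error type, and message are illustrative assumptions, not the actual implementation.

from torch.distributed.tensor import Partial, Placement

def _assert_no_mixed_partial_types(placements: tuple[Placement, ...]) -> None:
    # Partial is a frozen dataclass, so Partial("sum") and Partial("max")
    # compare unequal and a set keeps one entry per reduce type.
    partials = {p for p in placements if p.is_partial()}
    if len(partials) > 1:
        raise RuntimeError(
            f"Mixed partial types are not supported in one DTensor spec, got {partials}"
        )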

Member

@zpcore zpcore left a comment


LGTM!

Curious, do we still plan to support mixed Partial in the future?

@wconstab
Contributor Author

No, I can't imagine a use case, so unless someone finds a compelling one, let's simplify our lives.

SergeyTyshkevich pushed a commit to SergeyTyshkevich/chart2 that referenced this pull request Jan 19, 2026
Comment on lines +867 to +871
# We do not see a valid use case for mixing different partial types in the same DTensor.
# In principle it could be supported, but since nonlinear reductions (e.g. max) exist, the relative
# ordering of different partials would become semantically critical. Without a motivating use case, we prohibit this.
_assert_no_mixed_partial_types(current_spec.placements)
_assert_no_mixed_partial_types(target_spec.placements)
Contributor

Is it true that there are several places where mixed partials could show up?

  1. distribute_tensor placements arg (not banned)
  2. DTensor.from_local placements arg (not banned)
  3. sharding propagation (banned here if needs_redistribute; o/w not banned although I can't think of a real case where it happens from any op)
  4. DTensor.redistribute (banned here)

From a completeness perspective, do you think it makes sense to move _assert_no_mixed_partial_types into a utils file and call it from cases 1 and 2? (See the sketch below.)
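To illustrate the distinction between cases (1)/(2) and (4) above, here is a hedged sketch using the public DTensor APIs. The 2x2 CPU mesh assumes a 4-rank job with the default process group initialized; whether case (2) errors for other reasons is not something this PR changes, and the exact error raised in case (4) is an assumption.

import torch
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import DTensor, Partial, Replicate

mesh = init_device_mesh("cpu", (2, 2))
local = torch.randn(4, 4)

# case (2): from_local with mixed partial types is not banned by this PR
mixed = DTensor.from_local(local, mesh, [Partial("sum"), Partial("max")])

# case (4): redistributing to a mixed-partial target spec is what this PR rejects
replicated = DTensor.from_local(local, mesh, [Replicate(), Replicate()])
replicated.redistribute(mesh, [Partial("sum"), Partial("max")])  # raises after this PR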

Contributor Author

Yeah, I was planning to add (1) and (2) in a later PR and update the docs accordingly, but if folks are in agreement I can just include it in this PR. And I agree about (3): it is not strictly banned, but it seems practically impossible for it to happen.

@wconstab
Contributor Author

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label on Jan 20, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 2 jobs have failed, first few of them are: trunk / linux-jammy-rocm-py3.10 / test (distributed, 1, 3, linux.rocm.gpu.gfx942.4), trunk / linux-jammy-rocm-py3.10 / test (distributed, 2, 3, linux.rocm.gpu.gfx942.4)

Details for Dev Infra team Raised by workflow job

@wconstab
Contributor Author

@pytorchbot merge -i

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: Check Labels / Check labels

Details for Dev Infra team Raised by workflow job

@wconstab
Contributor Author

@pytorchbot merge -i

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

riccardofelluga pushed a commit to riccardofelluga/pytorch that referenced this pull request Jan 27, 2026
wconstab added a commit that referenced this pull request Jan 28, 2026
Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.

[ghstack-poisoned]
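As a hedged sketch of what "filters out any such combinations" could look like in a strategy-expansion helper (the helper name, signature, and structure are assumptions for illustration, not the actual change in the follow-up PR):

from torch.distributed.tensor import Partial, Placement

def _filter_mixed_partials(
    candidates: list[tuple[Placement, ...]]
) -> list[tuple[Placement, ...]]:
    # Drop any candidate placement tuple that mixes two different partial
    # reduce types (e.g. Partial("sum") together with Partial("max")).
    kept = []
    for placements in candidates:
        reduce_ops = {p.reduce_op for p in placements if isinstance(p, Partial)}
        if len(reduce_ops) <= 1:
            kept.append(placements)
    return kept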
pytorchmergebot pushed a commit that referenced this pull request Feb 18, 2026
…3614)

Per #172609, we do not allow
mixed partial placements in one DTensor spec. It is therefore pointless
to generate strategies that encourage mixed partial types.

Before this PR, any OP strategy that has a rule for more than one
partial type and uses the expand util would have combined them
inappropriately.  After this PR, the expansion util filters out any such
combinations.
Pull Request resolved: #173614
Approved by: https://github.com/pianpwk, https://github.com/zpcore

Labels

ci-no-td (Do not run TD on this PR), ciflow/inductor, ciflow/trunk (Trigger trunk jobs on your pull request), Merged, release notes: distributed (dtensor), Reverted
