
[coor-slicing] _select_split_tensor#169551

Closed
aorenste wants to merge 10 commits into gh/aorenste/158/base from gh/aorenste/158/head

Conversation

@aorenste
Contributor

@aorenste aorenste commented Dec 4, 2025

`Placement._split_tensor()` computes and returns more information than most callers need - typically a caller invokes it and then discards most of the results. This PR adds `Placement._select_split_tensor()`, which lets the caller specify which parts they want so that only those are computed - in essence a combination of `Placement._split_tensor()` and `Shard._select_shard()`.

Stack from ghstack (oldest at bottom):

[ghstack-poisoned]
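The pattern described above - replacing a helper that computes everything with a selective variant that computes only the requested outputs - can be sketched in plain Python. This is an illustrative sketch only: `split_all` and `select_split` are hypothetical names, and plain lists stand in for tensors; the actual DTensor method signatures are not shown here.

```python
def split_all(values, num_shards):
    """Monolithic helper: split `values` into `num_shards` even chunks,
    padding the last chunk. Returns (chunks, pad_sizes) -- everything,
    whether the caller wants it or not."""
    chunk = -(-len(values) // num_shards)  # ceiling division
    chunks, pads = [], []
    for i in range(num_shards):
        part = values[i * chunk:(i + 1) * chunk]
        pad = chunk - len(part)
        chunks.append(part + [0] * pad)
        pads.append(pad)
    return chunks, pads

def select_split(values, num_shards, shard_index, want_pad=False):
    """Selective variant, analogous in spirit to _select_split_tensor:
    compute only the requested shard (and optionally its pad size)."""
    chunk = -(-len(values) // num_shards)
    part = values[shard_index * chunk:(shard_index + 1) * chunk]
    pad = chunk - len(part)
    result = part + [0] * pad
    return (result, pad) if want_pad else result

# The selective call does work proportional to one chunk rather than
# materializing every chunk and every pad size:
print(select_split(list(range(10)), 4, 3))  # last shard, padded: [9, 0, 0]
```

The payoff is the same as in the PR: callers that previously called the monolithic helper and threw away most of its return value can now ask for just the shard (and pad) they need.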
@pytorch-bot

pytorch-bot Bot commented Dec 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/169551

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 127 Pending

As of commit 3307b34 with merge base dc48fef:
💚 Looks good so far! There are no failures yet. 💚

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

tiendatngcs pushed a commit to tiendatngcs/pytorch-Dec25 that referenced this pull request Dec 10, 2025
ghstack-source-id: 5a03dff
Pull Request resolved: pytorch/pytorch#169551
@aorenste aorenste added the topic: not user facing label Dec 10, 2025
@aorenste aorenste changed the title WIP: _select_split_tensor [coor-slicing] _select_split_tensor Jan 6, 2026
[ghstack-poisoned]
@aorenste aorenste marked this pull request as ready for review January 7, 2026 14:35
@aorenste aorenste requested a review from ezyang January 7, 2026 14:36
@ezyang ezyang requested review from dzmitry-huba and fduwjj January 8, 2026 15:13
[ghstack-poisoned]
@aorenste
Contributor Author

aorenste commented Jan 9, 2026

@pytorchbot merge

@pytorch-bot pytorch-bot Bot added the ciflow/trunk label Jan 9, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

[ghstack-poisoned]

[ghstack-poisoned]
@aorenste
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@jeanschmidt
Contributor

@pytorchbot revert -m "seems to be breaking internal signals, see D90448078" -c ghfirst

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Jan 12, 2026
This reverts commit c6583cb.

Reverted #169551 on behalf of https://github.com/jeanschmidt due to seems to be breaking internal signals, see D90448078 ([comment](#169551 (comment)))
@pytorchmergebot
Collaborator

@aorenste your PR has been successfully reverted.

@pytorchmergebot pytorchmergebot added the Reverted and ci-no-td labels Jan 12, 2026
@jeanschmidt
Contributor

@pytorchbot merge -f "should not have reverted"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f only as a last resort; instead, consider -i/--ignore-current to continue the merge while ignoring current failures. This allows currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

hinriksnaer pushed a commit to hinriksnaer/pytorch that referenced this pull request Jan 12, 2026
`Placement._split_tensor()` computes and returns too much information - in general most callers call it and then throw away most of the results. Added `Placement._select_split_tensor()` which allows the caller to say which parts they want so we can compute only those bits - in essence it is the combination of `Placement._split_tensor()` and `Shard._select_shard()`.

Pull Request resolved: pytorch#169551
Approved by: https://github.com/ezyang
ghstack dependencies: pytorch#169549, pytorch#169550
hinriksnaer pushed a commit to hinriksnaer/pytorch that referenced this pull request Jan 12, 2026
This reverts commit c6583cb.

Reverted pytorch#169551 on behalf of https://github.com/jeanschmidt due to seems to be breaking internal signals, see D90448078 ([comment](pytorch#169551 (comment)))
SergeyTyshkevich pushed a commit to SergeyTyshkevich/chart2 that referenced this pull request Jan 19, 2026
ghstack-source-id: b1a095b
Pull Request resolved: pytorch/pytorch#169551
@github-actions github-actions Bot deleted the gh/aorenste/158/head branch February 12, 2026 02:23

Labels

ci-no-td, ciflow/inductor, ciflow/trunk, Merged, Reverted, topic: not user facing


4 participants