Slicing with backed should produce backed output when possible #175819

Closed
ColinPeppler wants to merge 11 commits into gh/ColinPeppler/2/base from gh/ColinPeppler/2/head

Conversation

ColinPeppler (Contributor) commented Feb 26, 2026

pytorch-bot bot commented Feb 26, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/175819

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit 72cb53f with merge base e05e600:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot commented Feb 26, 2026

This PR needs a release notes: label

If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ColinPeppler added a commit that referenced this pull request Feb 26, 2026
ghstack-source-id: fe64a32
Pull Request resolved: #175819
ColinPeppler added the topic: not user facing label Feb 26, 2026
cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy kadeng muchulee8 amjames chauhang aakhundov coconutruben jataylo

ColinPeppler added a commit that referenced this pull request Feb 26, 2026
ghstack-source-id: 0cb31ef
Pull Request resolved: #175819

ColinPeppler added a commit that referenced this pull request Feb 26, 2026
ghstack-source-id: f28607c
Pull Request resolved: #175819
ColinPeppler added a commit that referenced this pull request Feb 26, 2026
ghstack-source-id: 9406558
Pull Request resolved: #175819
    return 0
elif guard_or_false(index > size):
    return size
elif guard_or_false(index >= 0):

Contributor

elif guard_or_false(index < 0):
    return sym_max(index + size, 0)  # wrap then clamp to [0, size]

?
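The branches quoted above mirror the normalization Python itself applies to slice bounds: clamp anything past the end to `size`, keep non-negative indices as-is, and wrap negatives by adding `size` before clamping at 0. As a minimal sketch (plain ints rather than symbolic expressions, and `clamp_slice_index` is a hypothetical name, not a function from the PR), the same logic can be cross-checked against CPython's own `slice.indices()`:

```python
def clamp_slice_index(index: int, size: int) -> int:
    # Eager-int analogue of the guard_or_false branches quoted above:
    # normalize a slice bound the way Python does.
    if index >= size:
        return size
    if index >= 0:
        return index
    # Negative index: wrap by adding size, then clamp at 0.
    return max(index + size, 0)

# Cross-check against CPython's own normalization via slice.indices().
for size in range(0, 6):
    for index in range(-2 * size - 2, 2 * size + 3):
        start, _, _ = slice(index, None).indices(size)
        assert clamp_slice_index(index, size) == start
```

The symbolic code can't branch eagerly like this, which is why it falls back to `sym_max` when only `index < 0` is provable.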

# Min/Max fallback: we can prove Min(a, b) <= b, but this type of Min/Max
# reasoning isn't handled in sympy yet. So, just evaluate the Min here.
for lhs, rhs in [(left, right), (right, left)]:
    if isinstance(lhs, (sympy.Min, sympy.Max)) and rhs in lhs.args:

Contributor @laithsakka commented Mar 2, 2026

for lhs, rhs in [(left, right), (right, left)]:
    if isinstance(lhs, sympy.Min) and rhs in lhs.args:
        return lhs  # Min(Min(a, b), b) = Min(a, b)
    if isinstance(lhs, sympy.Max) and rhs in lhs.args:
        return rhs  # Min(Max(a, b), b) = b

Less dependence on sympy's implicit optimization?
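The suggested rewrite rests on two lattice identities, which hold for any totally ordered values. A quick numeric spot-check (plain `min`/`max` standing in for `sympy.Min`/`sympy.Max`; this is not the PR's code) confirms both:

```python
from itertools import product

# Spot-check the two identities behind the suggested rewrite:
#   Min(Min(a, b), b) == Min(a, b)   -> return lhs
#   Min(Max(a, b), b) == b           -> return rhs
for a, b in product(range(-3, 4), repeat=2):
    assert min(min(a, b), b) == min(a, b)
    assert min(max(a, b), b) == b
```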

shifts = torch.arange(0, 64, 8, device=x.device, dtype=torch.int64)
return (expanded >> shifts) & 255
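The shift-and-mask pattern quoted above splits a 64-bit value into its 8 little-endian bytes. A pure-Python sketch of the same trick (on a plain int rather than a tensor; `unpack_bytes_le` is a hypothetical name for illustration) can be verified against `struct`'s little-endian `uint64` layout:

```python
import struct

def unpack_bytes_le(value: int) -> list[int]:
    # Same shift-and-mask pattern as the quoted tensor code:
    # shifts 0, 8, ..., 56 pull out the 8 little-endian bytes.
    shifts = range(0, 64, 8)
    return [(value >> s) & 255 for s in shifts]

# Cross-check against struct's little-endian uint64 packing.
for value in (0, 1, 255, 256, 0x0123456789ABCDEF, 2**64 - 1):
    assert unpack_bytes_le(value) == list(struct.pack("<Q", value))
```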

torch.cuda.caching_allocator_enable(False)
Contributor

context manager ?

Contributor Author

There's no context manager for it yet; I think there should be one. Left as a follow-up.

Contributor

I mean you should use `with ...` or `try/finally` here. If this test fails, it will ruin the state of other tests.

Contributor Author

Good point 👍

Contributor @laithsakka left a comment

Left some comments.

@pytorch-bot pytorch-bot bot added the ciflow/torchtitan Run TorchTitan integration tests label Mar 12, 2026
sandy-gags pushed a commit to sandy-gags/pytorch that referenced this pull request Mar 12, 2026
@ColinPeppler (Contributor Author)

Left the negative-indexing handling for the next PR.

@laithsakka changed the title from "Slicing with backed should produce backed output" to "Slicing with backed should produce backed output when possible" Mar 12, 2026
pytorch-bot bot pushed a commit that referenced this pull request Mar 31, 2026
Summary:

* `x[0:s1]` where `x.size(0)` is `s0-1` should produce `Min(s1, s0-1)`
* Before this PR, it would produce an unbacked symbol `u0`.
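The symbolic claim in the summary can be sanity-checked eagerly: for a sequence of length `s0 - 1`, the length of `x[0:s1]` is exactly `min(s1, s0 - 1)`. A sketch with plain Python lists standing in for tensors:

```python
# Eager sanity check of the summary's claim:
# for x with x.size(0) == s0 - 1, len(x[0:s1]) == Min(s1, s0 - 1).
for s0 in range(1, 8):
    n = s0 - 1            # x.size(0)
    x = list(range(n))
    for s1 in range(0, 10):
        assert len(x[0:s1]) == min(s1, n)
```

The PR makes the compiler produce that `Min` expression symbolically instead of giving up and allocating a fresh unbacked symbol.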





imported-using-ghimport

Test Plan: Imported from OSS

Differential Revision: D98937973

Pulled By: ColinPeppler
@facebook-github-tools

@pytorchbot merge -i

(Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 2 checks: inductor / unit-test / inductor-test / test (inductor, 1, 2, linux.g5.4xlarge.nvidia.gpu), inductor / inductor-cpu-test / test (cpu_inductor_torchbench, 1, 2, linux.2xlarge.amx, unstable)

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR has internal changes and must be landed via Phabricator! Please try re-importing/re-exporting the PR!

Details for Dev Infra team: raised by workflow job.


@facebook-github-tools

@pytorchbot merge -i

(Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

@pytorchmergebot
Collaborator

Merge started

Your change will be merged while ignoring the following 2 checks: inductor / unit-test / inductor-test / test (inductor, 1, 2, linux.g5.4xlarge.nvidia.gpu), inductor / inductor-cpu-test / test (cpu_inductor_torchbench, 1, 2, linux.2xlarge.amx, unstable)


@izaitsevfb
Contributor

@pytorchbot revert -c ghfirst -m 'reverted internally'

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Apr 1, 2026
@pytorchmergebot
Collaborator

@ColinPeppler your PR has been successfully reverted.
⚠️ This PR might contain internal changes
cc: @pytorch/pytorch-dev-infra

pytorch-bot bot pushed a commit that referenced this pull request Apr 1, 2026
…le (#178899)

Summary:

Original PR: #175819
- It got reverted internally (D98767572).
- It must be relanded as a new internal diff, then exported again (hence this diff).


### Summary
* `x[0:s1]` where `x.size(0)` is `s0-1` should produce `Min(s1, s0-1)`
* Before this PR, it would produce an unbacked symbol `u0`.

imported-using-ghimport

Test Plan: Imported from OSS

Reviewed By: sevenEng

Differential Revision: D98937973

Pulled By: ColinPeppler

Labels

ci-no-td (Do not run TD on this PR), ciflow/dtensor (Run DTensor specific tests), ciflow/inductor, ciflow/torchtitan (Run TorchTitan integration tests), ciflow/trunk (Trigger trunk jobs on your pull request), Merged, module: inductor, Reverted, topic: not user facing (topic category)
