
Support negative index slicing with backed symints #177308

Closed
ColinPeppler wants to merge 12 commits into gh/ColinPeppler/4/base from gh/ColinPeppler/4/head

Conversation


pytorch-bot bot commented Mar 12, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/177308

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 907a318 with merge base a345892:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.


pytorch-bot bot commented Mar 12, 2026

This PR needs a release notes: label

If your changes are user facing and intended to be part of the release notes, please use a label starting with `release notes:`.

If not, please add the `topic: not user facing` label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

ColinPeppler added a commit that referenced this pull request Mar 12, 2026
ghstack-source-id: b6111aa
Pull Request resolved: #177308
@ColinPeppler added the `topic: not user facing` (topic category) label Mar 12, 2026
@ColinPeppler ColinPeppler requested a review from laithsakka March 12, 2026 21:21
```
shifts = torch.arange(0, 64, 8, device=x.device, dtype=torch.int64)
return (expanded >> shifts) & 255
```
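For context, the quoted snippet unpacks each 64-bit integer into its 8 little-endian bytes by shifting in steps of 8 and masking with 255. The same bit trick in plain Python (an illustration, not the PR's code):

```python
def unpack_bytes_le(value: int) -> list[int]:
    # Mirror (expanded >> shifts) & 255 with shifts = 0, 8, ..., 56:
    # each shift exposes the next little-endian byte, and & 255 masks it out.
    return [(value >> s) & 255 for s in range(0, 64, 8)]

# 0x0102030405060708 unpacks lowest byte first (little-endian order)
assert unpack_bytes_le(0x0102030405060708) == [8, 7, 6, 5, 4, 3, 2, 1]
```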

```
torch.cuda.caching_allocator_enable(False)
```
Contributor


Ditto, same as the previous PR; please address before landing. Maybe do try/catch/finally?

```
if any(free_unbacked_symbols(x) for x in (start, end, dim_size)):
    min_func = sympy.Min
    max_func = sympy.Max
elif any(
```
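For reference, the backed (concrete-integer) version of this normalization follows standard Python slice semantics: a negative index has the dimension size added to it, then both endpoints are clamped to [0, dim_size]. A minimal sketch (illustrative helper, not the PR's code):

```python
def clamp_slice(start: int, end: int, dim_size: int) -> tuple[int, int]:
    # Negative indices count from the end of the dimension.
    if start < 0:
        start += dim_size
    if end < 0:
        end += dim_size
    # Clamp both endpoints into [0, dim_size], as Python slicing does.
    start = max(0, min(start, dim_size))
    end = max(0, min(end, dim_size))
    return start, end

assert clamp_slice(-3, -1, 10) == (7, 9)   # matches range(10)[-3:-1]
assert clamp_slice(-20, 5, 10) == (0, 5)   # out-of-range start clamps to 0
```

With unbacked symbolic sizes these comparisons cannot be decided at trace time, which is why the diff swaps `min`/`max` for their `sympy` counterparts.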
Contributor


Hmm, this is only needed when `backed_size_oblivious` is on.
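For context on the `sympy.Min`/`sympy.Max` branch: unlike Python's builtins, these stay unevaluated when the operands cannot be ordered, which is what symbolic sizes require. A minimal sympy illustration (not PyTorch code):

```python
import sympy

s = sympy.Symbol("s")                 # no sign assumptions: s vs 0 undecidable
p = sympy.Symbol("p", positive=True)  # known positive

# An undecidable comparison stays a symbolic Max node
assert isinstance(sympy.Max(s, 0), sympy.Max)
# Decidable comparisons simplify away
assert sympy.Max(p, 0) == p
assert sympy.Max(3, 0) == 3
```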

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy kadeng muchulee8 amjames chauhang aakhundov coconutruben jataylo

[ghstack-poisoned]
@ColinPeppler added the `ciflow/dtensor` (Run DTensor specific tests) label Mar 19, 2026
@ColinPeppler
Contributor Author

@pytorchbot merge

@pytorch-bot added the `ciflow/trunk` (Trigger trunk jobs on your pull request) label Mar 19, 2026
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot added a commit that referenced this pull request Mar 19, 2026
This reverts commit 9a7ae22.

Reverted #177308 on behalf of https://github.com/yangw-dev due to sorry the pr breaks internal test RiskExtrapolationModuleStabilityTest, please fix it and reland D97174518 ([comment](#175819 (comment)))
@pytorchmergebot
Collaborator

@ColinPeppler your PR has been reverted as part of the stack under #175819.

@pytorchmergebot added the `Reverted` and `ci-no-td` (Do not run TD on this PR) labels Mar 19, 2026
ryanzhang22 pushed a commit to ryanzhang22/pytorch that referenced this pull request Mar 19, 2026
…77308)"

This reverts commit 9a7ae22.

Reverted pytorch#177308 on behalf of https://github.com/yangw-dev due to sorry the pr breaks internal test RiskExtrapolationModuleStabilityTest, please fix it and reland D97174518 ([comment](pytorch#175819 (comment)))
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #177418


AaronWang04 pushed a commit to AaronWang04/pytorch that referenced this pull request Mar 31, 2026

@ColinPeppler
Contributor Author

@pytorchbot rebase -b main

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased gh/ColinPeppler/4/orig onto refs/remotes/origin/main; please pull locally before adding more changes (for example, via `ghstack checkout https://github.com/pytorch/pytorch/pull/177308`)

pytorchmergebot pushed a commit that referenced this pull request Apr 2, 2026
ghstack-source-id: 8f31f7c
Pull Request resolved: #177308

@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #177418


ColinPeppler added a commit that referenced this pull request Apr 6, 2026
ghstack-source-id: 130382e
Pull Request resolved: #177308
@pytorchmergebot
Collaborator

Starting merge as part of PR stack under #177418

pytorchmergebot pushed a commit that referenced this pull request Apr 7, 2026
#177418)

### Why
- An IMA debugging aid to specifically disable CCA on a targeted block of code.
- Another option is `PYTORCH_NO_CUDA_MEMORY_CACHING=1` but that is set globally.

Usually I'd do this.
```
torch.cuda.caching_allocator_enable(False)
try:
    ...
finally: # make sure to clean up even on exception
    torch.cuda.caching_allocator_enable(True)
```

### What
Add a utility that
- Disables CUDA caching allocator (CCA) when entering the block.
- Restores the CCA state when exiting the block (even on exceptions).
```
with torch.cuda.caching_allocator_disabled():
    ...
```

Pull Request resolved: #177418
Approved by: https://github.com/eee4017, https://github.com/laithsakka
ghstack dependencies: #177308
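The restore-on-exit behavior described in #177418 can be sketched generically with `contextlib`; here `enable_fn` stands in for `torch.cuda.caching_allocator_enable` so the sketch stays torch-free (a hypothetical helper, not the merged implementation):

```python
from contextlib import contextmanager

@contextmanager
def temporarily_disabled(enable_fn):
    # Disable on entry; the finally clause restores even if the block raises.
    enable_fn(False)
    try:
        yield
    finally:
        enable_fn(True)

# With torch this would be used roughly as:
#   with temporarily_disabled(torch.cuda.caching_allocator_enable):
#       ...
```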
nklshy-aws pushed a commit to nklshy-aws/pytorch that referenced this pull request Apr 7, 2026

Labels

ci-no-td (Do not run TD on this PR)
ciflow/dtensor (Run DTensor specific tests)
ciflow/inductor
ciflow/torchtitan (Run TorchTitan integration tests)
ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
module: inductor
Reverted
topic: not user facing (topic category)


3 participants