Slicing with backed should produce backed output when possible #175819

ColinPeppler wants to merge 11 commits into gh/ColinPeppler/2/base
Conversation
[ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/175819
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (2 unrelated failures) As of commit 72cb53f with merge base e05e600:
- BROKEN TRUNK: the following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
- UNSTABLE: the following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy kadeng muchulee8 amjames chauhang aakhundov coconutruben jataylo [ghstack-poisoned]
```python
    return 0
elif guard_or_false(index > size):
    return size
elif guard_or_false(index >= 0):
```
```python
elif guard_or_false(index < 0):
    return sym_max(index + size, 0)  # wrap then clamp to [0, size]
```

?
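The branch order being discussed can be sanity-checked with a concrete-integer sketch. Inside PyTorch, `guard_or_false` and `sym_max` operate on symbolic ints; here plain Python comparisons and `max` stand in for them (an assumption of this sketch), so the clamping semantics can be verified against Python's own slice behavior.

```python
def clamp_slice_index(index: int, size: int) -> int:
    """Clamp a slice endpoint into [0, size], wrapping negatives first.

    Mirrors the branch structure above, with plain ints standing in for
    symbolic ints (guard_or_false -> ordinary comparison, sym_max -> max).
    """
    if index == 0:
        return 0
    elif index > size:
        return size
    elif index >= 0:
        return index  # already within [0, size]
    else:
        # index < 0: wrap, then clamp to [0, size]
        return max(index + size, 0)
```

For example, `clamp_slice_index(-1, 4)` gives `3`, matching how Python resolves the stop of `x[0:-1]` for a length-4 sequence.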
torch/_inductor/sizevars.py
Outdated
```python
# Min/Max fallback: we can prove Min(a, b) <= b, but this type of Min/Max
# reasoning isn't handled in sympy yet. So, just evaluate the Min here.
for lhs, rhs in [(left, right), (right, left)]:
    if isinstance(lhs, (sympy.Min, sympy.Max)) and rhs in lhs.args:
```
```python
for lhs, rhs in [(left, right), (right, left)]:
    if isinstance(lhs, sympy.Min) and rhs in lhs.args:
        return lhs  # Min(Min(a, b), b) = Min(a, b)
    if isinstance(lhs, sympy.Max) and rhs in lhs.args:
        return rhs  # Min(Max(a, b), b) = b
```

Less dependence on sympy's implicit simplification?
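The two structural identities in the suggestion can be exercised directly with sympy. This is an illustrative sketch, not the actual `sizevars.py` helper; the function name is hypothetical.

```python
import sympy

def min_with_structural_rules(left, right):
    """Apply the two Min/Max identities from the review thread before
    falling back to sympy's own Min construction."""
    for lhs, rhs in [(left, right), (right, left)]:
        if isinstance(lhs, sympy.Min) and rhs in lhs.args:
            return lhs  # Min(Min(a, b), b) = Min(a, b)
        if isinstance(lhs, sympy.Max) and rhs in lhs.args:
            return rhs  # Min(Max(a, b), b) = b
    return sympy.Min(left, right)

a, b = sympy.symbols("a b")
```

This keeps the simplification explicit in our code rather than relying on sympy evaluating `Min(Max(a, b), b)` on its own.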
```python
shifts = torch.arange(0, 64, 8, device=x.device, dtype=torch.int64)
return (expanded >> shifts) & 255
```

```python
torch.cuda.caching_allocator_enable(False)
```
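The shift-and-mask line quoted above unpacks a 64-bit value into its 8 little-endian bytes. A scalar Python sketch of the same idea (no tensors; purely illustrative):

```python
def unpack_bytes_le(x: int) -> list[int]:
    # Same idea as (expanded >> shifts) & 255 with shifts = 0, 8, ..., 56:
    # shift each byte down to the low position, then mask it off.
    return [(x >> s) & 255 for s in range(0, 64, 8)]
```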
No context manager for this yet atm. I think there should be one; left as a follow-up.

I mean you should use `with ...` or `try`/`finally`; if this test fails it will corrupt the state of other tests.
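A minimal sketch of the kind of context manager being asked for. The real setter is `torch.cuda.caching_allocator_enable`; to keep the sketch runnable without CUDA it takes the setter as a parameter (an assumption of this sketch), and `try`/`finally` guarantees the state is restored even if the test body raises.

```python
import contextlib

@contextlib.contextmanager
def caching_allocator_disabled(set_enabled):
    """Disable the allocator on entry and always re-enable it on exit.

    `set_enabled` stands in for torch.cuda.caching_allocator_enable so this
    sketch runs without CUDA; pass that function in real code.
    """
    set_enabled(False)
    try:
        yield
    finally:
        set_enabled(True)  # restored even if the body raised
```

In a test this would be used as `with caching_allocator_disabled(torch.cuda.caching_allocator_enable): ...`, so a failing test no longer leaks allocator state into the rest of the suite.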
ghstack-source-id: a563cc6 Pull Request resolved: pytorch/pytorch#175819
Left the negative-indexing handling for the next PR.
Summary:
* `x[0:s1]` where `x.size(0)` = `s0-1` should produce `Min(s1, s0-1)`
* Before this PR, it would produce `u0`.

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy kadeng muchulee8 amjames chauhang aakhundov coconutruben jataylo

imported-using-ghimport Test Plan: Imported from OSS Differential Revision: D98937973 Pulled By: ColinPeppler
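The claimed output size can be sanity-checked symbolically. A small sympy sketch (illustrative only, not the Inductor code path):

```python
import sympy

s0, s1 = sympy.symbols("s0 s1", integer=True, positive=True)
dim_size = s0 - 1                 # x.size(0)
sliced = sympy.Min(s1, dim_size)  # size of x[0:s1] after this PR

# Substituting concrete values shows the clamp behaving like Python slicing:
# with s0=5 the dimension is 4, so s1=10 clamps to 4 while s1=2 stays 2.
```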
@pytorchbot merge -i (Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

Merge started. Your change will be merged while ignoring the following 2 checks: inductor / unit-test / inductor-test / test (inductor, 1, 2, linux.g5.4xlarge.nvidia.gpu), inductor / inductor-cpu-test / test (cpu_inductor_torchbench, 1, 2, linux.2xlarge.amx, unstable). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

Merge failed. Reason: This PR has internal changes and must be landed via Phabricator! Please try reimporting/re-exporting the PR! Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -i (Initiating merge automatically since Phabricator Diff has merged, merging with -i because oss signals were bypassed internally)

Merge started. Your change will be merged while ignoring the following 2 checks: inductor / unit-test / inductor-test / test (inductor, 1, 2, linux.g5.4xlarge.nvidia.gpu), inductor / inductor-cpu-test / test (cpu_inductor_torchbench, 1, 2, linux.2xlarge.amx, unstable). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -c ghfirst -m 'reverted internally'
@pytorchbot successfully started a revert job. Check the current status here.
Revert "Slicing with backed should produce backed output when possible (#175819)"

This reverts commit b0b4804. Reverted #175819 on behalf of https://github.com/izaitsevfb due to reverted internally ([comment](#175819 (comment)))
@ColinPeppler your PR has been successfully reverted.
Slicing with backed should produce backed output when possible (#178899)

Summary: Original PR: #175819 - it got reverted internally (D98767572) - I must reland as a new diff internally -> then export again (hence this diff)

### Summary

* `x[0:s1]` where `x.size(0)` = `s0-1` should produce `Min(s1, s0-1)`
* Before this PR, it would produce `u0`.

imported-using-ghimport Test Plan: Imported from OSS Reviewed By: sevenEng Differential Revision: D98937973 Pulled By: ColinPeppler
`x[0:s1]` where `x.size(0)` = `s0-1` should produce `Min(s1, s0-1)`, not `u0`.

Stack from ghstack (oldest at bottom):
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben @jataylo
Differential Revision: D98767572