[Bugfix][Dynamo] Fix Sparse tensors by graph break in Dynamo #164873

Lucaskabela wants to merge 17 commits into main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/164873
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit d9b2166 with merge base 14af1dc. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "topic: not user facing"
Force-pushed from 7963d0e to d778061.
There are perhaps two ways of going about this.

I am partial to option 1, since it is more defensive and clearer for users; the tradeoff is that it precludes using dynamo/aot_eager for sparse tensor capture. I think this is acceptable, since we already have a handful of graph breaks for sparse tensors and operations today (see `torch/_dynamo/variables/builtin.py`, line 2336 at 724463d).
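The gating condition under discussion can be sketched in plain Python. This is an illustrative mock, not Dynamo's actual internals: `GraphBreak` stands in for the `unimplemented_v2` graph-break signal, and the parameter names mirror the PR's condition `not tx.export or not config.capture_sparse_compute`.

```python
class GraphBreak(Exception):
    """Stand-in for Dynamo's unimplemented_v2 graph-break signal."""

    def __init__(self, gb_type, context):
        super().__init__(f"{gb_type}: {context}")
        self.gb_type = gb_type
        self.context = context


def maybe_wrap_tensor(is_sparse, is_export, capture_sparse_compute):
    """Wrap a tensor, or graph break on sparse inputs per the PR's condition.

    Sparse tensors are only allowed through when exporting AND sparse
    compute capture is enabled; otherwise we break the graph explicitly
    so the user sees why compilation fell back.
    """
    if is_sparse and (not is_export or not capture_sparse_compute):
        raise GraphBreak(
            gb_type="Attempted to wrap sparse Tensor",
            context="sparse tensors are unsupported outside export",
        )
    return "wrapped"
```

The defensive shape here is the point of option 1: an explicit, early, named failure at wrap time, rather than a confusing error deeper in the compilation stack.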
torch/_dynamo/variables/builder.py (Outdated)

    not tx.export or not config.capture_sparse_compute
):
    unimplemented_v2(
        gb_type="Attempted to wrap sparse Tensor",

Suggested change:
-    gb_type="Attempted to wrap sparse Tensor",
+    gb_type="Attempted to wrap sparse Tensor with VariableTracker",
torch/_dynamo/variables/builder.py (Outdated)

    ):
        unimplemented_v2(
            gb_type="Attempted to wrap sparse Tensor",
            context="",

Suggested change:
-    context="",
+    context=str(example_value),
If it works with …
Sure, let me check - cc @laithsakka @eellison
eellison left a comment:
Should this be in FakeTensor instead? E.g. if an operator decomposed to use sparse tensors, or had sparsity in the backward. See `torch/_subclasses/fake_impls.py`, line 1081 at 70925bd.
@StrongerXi yes, there is this check to fall back for unsupported tensors: `torch/_inductor/lowering.py`, lines 2136-2139 at 70925bd.
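The kind of fallback check being referenced could look roughly like this. This is a pure-Python sketch under stated assumptions: `Layout`, `FakeNode`, and `unsupported_input_tensor` are illustrative stand-ins, not Inductor's actual types or helpers.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Layout(Enum):
    STRIDED = auto()
    SPARSE_COO = auto()
    SPARSE_CSR = auto()


@dataclass
class FakeNode:
    """Minimal stand-in for a lowering input node carrying a layout."""
    layout: Layout


# Layouts the lowering path can handle; sparse layouts trigger a fallback.
_SUPPORTED = {Layout.STRIDED}


def unsupported_input_tensor(node):
    """Return True if this input should make lowering fall back to eager."""
    return node.layout not in _SUPPORTED


def lower_or_fallback(nodes):
    """Lower the graph, or fall back to eager if any input is unsupported."""
    if any(unsupported_input_tensor(n) for n in nodes):
        return "fallback_to_eager"
    return "lowered"
```

Centralizing the layout check this way is what makes the "very explicit" failure mode of the PR possible: every sparse input hits one well-known predicate instead of an arbitrary downstream error.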
Okay - let me see if we can move it here as opposed to adding another graph break :)
@pytorchmergebot merge -i "Test failure from cuda error not related to changes"
❌ 🤖 pytorchbot command failed: Try …
@pytorchmergebot merge -i
Merge started. Your change will be merged while ignoring the following 1 check: inductor / inductor-test / test (inductor_torchbench, 1, 2, linux.g5.4xlarge.nvidia.gpu). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed; first few of them are: trunk / linux-jammy-cuda12.8-py3.10-gcc11 / test (default, 3, 5, lf.linux.g6.4xlarge.experimental.nvidia.gpu). Details for Dev Infra team: raised by workflow job.
    for _ in range(3):
        self.assertEqual(foo_opt(view_6, buf31), foo(view_6, buf31))
    with self.assertRaises(torch._dynamo.exc.BackendCompilerFailed):

This was previously not supported but just silently didn't cudagraph (see line 5829 at f368345). Letting users know loudly seems preferable.
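The "fail loudly" preference above can be illustrated with a minimal sketch. The names are hypothetical, not the cudagraphs implementation: `BackendCompilerFailed` stands in for `torch._dynamo.exc.BackendCompilerFailed`, and the two functions contrast the old silent-skip behavior with the new explicit error.

```python
class BackendCompilerFailed(Exception):
    """Stand-in for torch._dynamo.exc.BackendCompilerFailed."""


def run_silently(inputs, is_supported):
    """Old behavior: unsupported inputs quietly skip the fast path."""
    if not all(is_supported(x) for x in inputs):
        return "eager"  # user never learns cudagraphs was skipped
    return "cudagraphs"


def run_loudly(inputs, is_supported):
    """New behavior: surface any unsupported input as a hard error."""
    for x in inputs:
        if not is_supported(x):
            raise BackendCompilerFailed(f"unsupported input: {x!r}")
    return "cudagraphs"
```

With the loud version, the test above can assert on the raised exception, instead of passing while silently running a slower path.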
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here.
Successfully rebased and force-pushed from e702058 to d9b2166.
@pytorchmergebot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours).
…#164873) Fixes pytorch#164823 by making the lack of support for sparse tensors very explicit (in fake tensor, inductor, and lowering code). Pull Request resolved: pytorch#164873. Approved by: https://github.com/williamwen42, https://github.com/eellison, https://github.com/mlazos
Fixes #164823 by making the lack of support for sparse tensors very explicit (in fake tensor, inductor, and lowering code).
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben