add batching rule for torch.Tensor.scatter_add_ #150543

guilhermeleobas wants to merge 4 commits into gh/guilhermeleobas/117/base
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/150543

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 3311987 with merge base 3da14d3.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
    int64_t dim,
    const Tensor& index, std::optional<int64_t> index_bdim,
    const Tensor& src, std::optional<int64_t> src_bdim) {
  auto self_ = self.clone(at::MemoryFormat::Preserve);
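For context on what this batching rule has to compute, here is a minimal pure-Python sketch of `scatter_add` semantics (for the 1-D case, `out[index[i]] += src[i]`) applied independently to each slice along a batch dimension, which is what `vmap` requires. The function names are hypothetical illustrations, not PyTorch APIs.

```python
def scatter_add_1d(self_row, index, src):
    # Out-of-place scatter_add along dim 0: out[index[i]] += src[i].
    # Works on a copy so the caller's data is untouched.
    out = list(self_row)
    for i, idx in enumerate(index):
        out[idx] += src[i]
    return out

def batched_scatter_add(self_batch, index, src_batch):
    # Emulates what a vmap batching rule computes: the same index is
    # shared across the batch, while self and src carry a leading
    # batch dimension and are processed slice by slice.
    return [scatter_add_1d(row, index, s)
            for row, s in zip(self_batch, src_batch)]
```

For example, `batched_scatter_add([[0, 0, 0], [1, 1, 1]], [0, 2, 1], [[1, 2, 3], [4, 5, 6]])` returns `[[1, 3, 2], [5, 7, 6]]`: each batch slice is scattered into independently.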
why do we need the clone now?
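One plausible answer (an assumption based on how in-place batching rules generally work, not confirmed by the thread): when `self` is not batched, every batch element aliases the same storage, so mutating it in place lets one batch element's writes leak into the next; cloning gives each logical call an independent buffer. The sketch below uses hypothetical pure-Python stand-ins to show the difference.

```python
def inplace_scatter_add_shared(self_row, index, src_batch):
    # BUGGY analogue: all batch elements mutate the one shared row,
    # so each result observes the writes of earlier batch elements.
    results = []
    for src in src_batch:
        for i, idx in enumerate(index):
            self_row[idx] += src[i]
        results.append(list(self_row))
    return results

def inplace_scatter_add_cloned(self_row, index, src_batch):
    # Correct analogue of the clone: each batch element gets its own
    # copy of self before the in-place update, so results stay
    # independent across the batch.
    results = []
    for src in src_batch:
        row = list(self_row)  # analogue of self.clone(...)
        for i, idx in enumerate(index):
            row[idx] += src[i]
        results.append(row)
    return results
```

With `self_row = [0, 0]`, `index = [0, 1]`, and two identical `src` slices `[1, 1]`, the shared version produces `[[1, 1], [2, 2]]` (the second slice sees the first slice's writes), while the cloned version correctly produces `[[1, 1], [1, 1]]`.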
@pytorchbot merge -r
@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict.
Successfully rebased
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Pull Request resolved: pytorch#150543 Approved by: https://github.com/zou3519
The underlying bug (eager vs AOTDispatcher output mismatch for as_strided_scatter) was fixed by #150543. Remove the stale skip. Closes #85879. Pull Request resolved: #177203. Approved by: https://github.com/aorenste, https://github.com/zou3519
Stack from ghstack (oldest at bottom):
* add batching rule for torch.Tensor.scatter_add_ #150543

cc @zou3519 @Chillee @samdow @kshitij12345