batching rule for aten::scatter_add_ #148307
Closed
Labels
enhancement (Not as big of a feature, but technically not a bug. Should be easy to fix) · module: functorch (Pertaining to torch.func or pytorch/functorch) · module: vmap · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
🚀 The feature, motivation and pitch
Hi Guys,
I'm a PhD student working on a PyTorch project. I'm currently hitting a performance warning when I apply `vmap` to a function that contains `scatter_add_`, since there is no batching rule for `aten::scatter_add_`. I need to operate on a very large tensor (maybe 10~40 GB), so I have to use `vmap` to save memory while keeping the efficiency of batched tensor operations. This is a very common pattern, and batching rules for similar `scatter` operations may already exist. All in all, I hope this feature can be implemented with some priority.
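For reference, here is a minimal sketch (with made-up shapes, not the actual project code) of the kind of call that reaches the slow fallback: because `scatter_add_` has no batching rule, functorch falls back to a per-example loop and warns about the performance drop.

```python
import torch
from torch.func import vmap

# Hypothetical minimal example: each batch element scatter-adds its
# `src` row into its own output row according to `index`.
def scatter_rows(out, index, src):
    # In-place scatter_add_ has no vmap batching rule, so functorch
    # falls back to running the op once per batch element.
    return out.scatter_add_(0, index, src)

B, K, N = 3, 4, 5
out = torch.zeros(B, N)
index = torch.randint(0, N, (B, K))
src = torch.randn(B, K)

# vmap over dim 0 of all three arguments; each call sees
# out_i: (N,), index_i: (K,), src_i: (K,).
result = vmap(scatter_rows)(out, index, src)
print(result.shape)  # torch.Size([3, 5])
```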
Alternatives
Currently, I just ignore the warning.
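Until a batching rule lands, one possible workaround (a sketch, assuming the batch sits on dim 0 and every batch element scatters into a same-sized output) is to flatten the batch and offset the indices, so a single unbatched out-of-place `scatter_add` does the whole job without vmap's per-example fallback loop:

```python
import torch

def batched_scatter_add(out, index, src):
    # out: (B, N), index: (B, K), src: (B, K).
    # Shift each batch's indices into its own slice of the flattened
    # output, so one unbatched scatter_add covers the whole batch.
    B, N = out.shape
    offsets = torch.arange(B, device=index.device).unsqueeze(1) * N
    flat = out.reshape(-1).scatter_add(
        0, (index + offsets).reshape(-1), src.reshape(-1)
    )
    return flat.reshape(B, N)

B, K, N = 3, 4, 5
out = torch.zeros(B, N)
index = torch.randint(0, N, (B, K))
src = torch.randn(B, K)
result = batched_scatter_add(out, index, src)
```

This avoids materializing anything larger than the flattened output, which may matter for the multi-GB tensors mentioned above.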
Additional context
Thanks to the PyTorch team for their hard work.
cc @zou3519 @Chillee @samdow @kshitij12345