var and std don't support all-reductions for complex or half tensors on CPU #51127
Closed
Labels
module: complex (Related to complex number support in PyTorch), module: half (Related to float16 half-precision floats), module: reductions, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Bug
On CPU, var and std use the legacy TH implementation for all-reductions (reductions over all dimensions at once).
pytorch/aten/src/ATen/native/ReduceOps.cpp, lines 1151 to 1156 in 3562ca2:

```cpp
// NOTE: CPU performance significantly regressed when attempting to port to ATen,
// so this dispatches differently based on device type.
// See https://github.com/pytorch/pytorch/pull/43858.
if (self.device().type() == kCPU) {
  return at::_var(self, unbiased);
}
```
But this special case skips the complex handling code added in #27653, so all-reductions fail in cases where single-dimension reductions work.
To Reproduce
```python
import torch
a = torch.rand(100, dtype=torch.complex64)
a.var()
```

raises

```
RuntimeError: _th_var not supported on CPUType for ComplexFloat
```
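As a workaround sketch (assuming the affected PyTorch versions described above): reducing over an explicit dimension dispatches through the ATen path, which gained complex support in #27653, so the dim-wise call succeeds where the all-reduction fails.

```python
import torch

a = torch.rand(100, dtype=torch.complex64)

# Reducing over an explicit dimension avoids the legacy TH all-reduce
# special case, so this works even on affected CPU builds.
v = a.var(dim=0)

# v is a 0-dim (scalar) tensor, since `a` is 1-D and dim 0 was reduced.
print(v.dim())
```

On affected versions, `a.var()` (no dims) still raises the RuntimeError shown above; only the dim-wise form goes through the complex-aware ATen kernel.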