
var and std don't support all-reductions for complex or half tensors on CPU #51127

@peterbell10

Description


🐛 Bug

var and std use the legacy TH implementation for all-reductions on CPU:

// NOTE: CPU performance significantly regressed when attempting to port to ATen,
// so this dispatches differently based on device type.
// See https://github.com/pytorch/pytorch/pull/43858.
if (self.device().type() == kCPU) {
return at::_var(self, unbiased);
}

But this special case skips the complex handling code added in #27653, so all-reductions fail in cases where single-dimension reductions work.
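For context, the complex handling that the legacy path misses amounts to defining variance over the squared moduli of deviations, so the result is real and non-negative. A minimal plain-Python sketch of that definition (an illustration, not PyTorch's actual implementation):

```python
def complex_var(xs, unbiased=True):
    # Variance of a complex sample: mean of |x - mean|^2.
    # abs() takes the complex modulus, so the result is a real number.
    n = len(xs)
    mean = sum(xs) / n
    ss = sum(abs(x - mean) ** 2 for x in xs)
    return ss / (n - 1 if unbiased else n)

vals = [1 + 2j, 3 - 1j, -2 + 0.5j]
v = complex_var(vals)  # a real (float) value
```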

To Reproduce

import torch
a = torch.rand(100, dtype=torch.complex64)
a.var()

raises

RuntimeError: _th_var not supported on CPUType for ComplexFloat
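Since the report notes that single-dimension reductions do work (they take the ATen path rather than the legacy TH one), a possible workaround until this is fixed is to reduce over an explicit dim:

```python
import torch

a = torch.rand(100, dtype=torch.complex64)
# a.var() hits the legacy TH all-reduction and raises on CPU, but
# reducing over an explicit dimension dispatches to the ATen kernel,
# which has the complex support added in #27653.
v = a.var(dim=0)  # real-valued variance of the complex sample
```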

cc @ezyang @anjali411 @dylanbespalko @mruberry

Metadata


Labels

- module: complex — Related to complex number support in PyTorch
- module: half — Related to float16 half-precision floats
- module: reductions
- triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
