Implement tanh_backward for complex dtypes #37701

@anjali411

Description

Implement tanh_backward for complex dtypes on CPU and CUDA.

>>> import torch
>>> x = torch.randn(4, dtype=torch.complex64, requires_grad=True)
>>> y = torch.tanh(x)
>>> z = y.sum()
>>> z.backward()
[W python_engine.cpp:148] Warning: Complex backward is not fully supported yet and could lead to wrong gradients for functions we have not fixed yet (function operator())
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/chourdiaanjali/pytorch2/torch/tensor.py", line 184, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/chourdiaanjali/pytorch2/torch/autograd/__init__.py", line 115, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: "tanh_backward_cpu" not implemented for 'ComplexFloat' (operator() at aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:517)
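
For reference, here is a minimal sketch of the formula the missing kernel would compute, assuming PyTorch's convention for complex gradients of holomorphic functions (incoming gradient multiplied by the conjugate of the local derivative, with d/dz tanh(z) = 1 - tanh(z)^2). The name tanh_backward_ref is hypothetical; the real kernels are tanh_backward_cpu/cuda in aten/src/ATen/native/cpu/BinaryOpsKernel.cpp and its CUDA counterpart, so this only illustrates the math, not the actual implementation:

import torch

def tanh_backward_ref(grad_output, output):
    # Sketch only: the derivative of tanh is 1 - tanh(z)^2; for complex dtypes
    # the incoming gradient is multiplied by the conjugate of that value
    # (assumed convention, not the authoritative kernel).
    return grad_output * (1 - output * output).conj()

x = torch.randn(4, dtype=torch.complex64)
y = torch.tanh(x)                       # the complex forward already works
grad_in = tanh_backward_ref(torch.ones_like(y), y)
print(grad_in)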

cc @ezyang @anjali411 @dylanbespalko

Labels

module: complex (Related to complex number support in PyTorch)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
