>>> x = torch.randn(4, dtype=torch.complex64, requires_grad=True)
>>> y = torch.tanh(x)
>>> z = y.sum()
>>> z.backward()
[W python_engine.cpp:148] Warning: Complex backward is not fully supported yet and could lead to wrong gradients for functions we have not fixed yet (function operator())
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/chourdiaanjali/pytorch2/torch/tensor.py", line 184, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/chourdiaanjali/pytorch2/torch/autograd/__init__.py", line 115, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: "tanh_backward_cpu" not implemented for 'ComplexFloat' (operator() at aten/src/ATen/native/cpu/BinaryOpsKernel.cpp.AVX2.cpp:517)
Implement `tanh_backward` for complex dtypes on CPU and CUDA.
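For reference, since `tanh` is holomorphic, the derivative the backward kernel would need is the standard d/dz tanh(z) = 1 - tanh(z)^2. A minimal standard-library sketch (no PyTorch required; the helper names here are just for illustration) checking that formula against a finite difference:

```python
import cmath

def tanh_grad(z):
    # Analytic derivative of tanh at complex z: d/dz tanh(z) = 1 - tanh(z)^2
    return 1 - cmath.tanh(z) ** 2

def numeric_grad(f, z, h=1e-6):
    # Central finite difference along the real axis; for a holomorphic f
    # the complex derivative is direction-independent, so this suffices.
    return (f(z + h) - f(z - h)) / (2 * h)

z = 0.3 + 0.7j
analytic = tanh_grad(z)
numeric = numeric_grad(cmath.tanh, z)
assert abs(analytic - numeric) < 1e-6
```

The same expression, written in terms of the saved forward output `y = tanh(x)` as `grad * (1 - y * y)`, is what the CPU/CUDA kernels would compute per element.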
cc @ezyang @anjali411 @dylanbespalko