[complex] torch.abs: does not match numpy  #48486

@kshitij12345

Description

import torch
import numpy as np

print('torch.cfloat\n')
x = torch.tensor([1e20+1e10j], dtype=torch.cfloat)
print(torch.abs(x), np.abs(x.numpy())) # non-vectorized: correct value
x = torch.tensor([1e20+1e10j] * 10, dtype=torch.cfloat)
print(torch.abs(x)) # vectorized: incorrect value
print(np.abs(x.numpy()))

print('\ntorch.cdouble\n')
x = torch.tensor([1e200+0j], dtype=torch.cdouble)
print(torch.abs(x), np.abs(x.numpy())) # non-vectorized: correct value
x = torch.tensor([1e200+0j] * 10, dtype=torch.cdouble)
print(torch.abs(x)) # vectorized: incorrect value
print(np.abs(x.numpy()))

Output

torch.cfloat

tensor([1.0000e+20]) [1.e+20]
tensor([       inf,        inf,        inf,        inf,        inf,        inf,
               inf,        inf, 1.0000e+20, 1.0000e+20])
[1.e+20 1.e+20 1.e+20 1.e+20 1.e+20 1.e+20 1.e+20 1.e+20 1.e+20 1.e+20]

torch.cdouble

tensor([1.0000e+200], dtype=torch.float64) [1.e+200]
tensor([        inf,         inf,         inf,         inf,         inf,
                inf,         inf,         inf, 1.0000e+200, 1.0000e+200],
       dtype=torch.float64)
[1.e+200 1.e+200 1.e+200 1.e+200 1.e+200 1.e+200 1.e+200 1.e+200 1.e+200
 1.e+200]
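The values overflowing suggest the vectorized path is computing |z| naively as sqrt(re² + im²): for re = 1e20 in float32, re² = 1e40 exceeds the float32 maximum (~3.4e38) and becomes inf, while NumPy's hypot-style evaluation rescales first and avoids the intermediate overflow. A minimal sketch of the difference in NumPy (the helper names `naive_abs` and `scaled_abs` are mine, for illustration only):

```python
import numpy as np

def naive_abs(z):
    # Squares the parts first: re*re overflows float32 once |re| exceeds
    # roughly sqrt(3.4e38) ~ 1.8e19, so the result becomes inf.
    re = np.real(z)
    im = np.imag(z)
    with np.errstate(over="ignore"):  # silence the expected overflow warning
        return np.sqrt(re * re + im * im)

def scaled_abs(z):
    # Hypot-style rescaling: divide by the larger magnitude first, so the
    # squared ratio stays <= 1 and the intermediate cannot overflow.
    re = np.abs(np.real(z))
    im = np.abs(np.imag(z))
    big = np.maximum(re, im)
    small = np.minimum(re, im)
    with np.errstate(invalid="ignore"):  # 0/0 where big == 0; masked below
        ratio = np.where(big > 0, small / big, 0.0)
    return big * np.sqrt(1.0 + ratio * ratio)

z = np.array([1e20 + 1e10j], dtype=np.complex64)
print(naive_abs(z))   # inf: 1e40 is not representable in float32
print(scaled_abs(z))  # ~1e20, matching np.abs(z)
```

This matches the symptom above: only the large-magnitude elements blow up, and only in the dtype whose square exceeds the floating-point range.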

cc @ezyang @anjali411 @dylanbespalko @mruberry @rgommers @heitorschueroff

Metadata

Labels

- module: NaNs and Infs: Problems related to NaN and Inf handling in floating point
- module: complex: Related to complex number support in PyTorch
- module: numpy: Related to numpy support, and also numpy compatibility of our operators
- triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
