As the title says, PyTorch's min and max are incompatible with NumPy's. For example:
import numpy as np
import torch

a = np.array((0 + 4j, 4 + 0j, -2 - 2j, 1 + 1j, 2 + 2j, 3 + 3j))
t = torch.from_numpy(a)

np_result = np.min(a)
torch_result = torch.min(t)
np_result is (-2 - 2j) while torch_result is (1. + 1.j). The latter seems hard to justify under any notion of min.
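For context, NumPy orders complex numbers lexicographically: by real part first, with the imaginary part as the tie-breaker. That rule fully explains the NumPy results above; a quick check:

```python
import numpy as np

a = np.array((0 + 4j, 4 + 0j, -2 - 2j, 1 + 1j, 2 + 2j, 3 + 3j))

# NumPy's complex ordering is lexicographic: compare real parts first,
# then imaginary parts to break ties.
lex_min = min(a, key=lambda z: (z.real, z.imag))
print(lex_min == np.min(a))  # True: both are (-2-2j)
```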
Max is no better:
np_result = np.max(a)
torch_result = torch.max(t)
np_result is (4 + 0j) while torch_result is (-inf + 0.j). Again, the torch_result seems hard to justify under any notion of max.
When a dim argument is supplied to max, PyTorch's behavior changes, but it is still incompatible with NumPy's:
torch_result = torch.max(t, dim=0)
Here torch_result.values is (3. + 3.j), which is at least a reasonable max value.
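Until the reductions agree, a NumPy-compatible complex min/max can be sketched in a few lines of PyTorch by comparing the real and imaginary parts separately (lex_min and lex_max are hypothetical helper names, not an official API):

```python
import torch

t = torch.tensor([0 + 4j, 4 + 0j, -2 - 2j, 1 + 1j, 2 + 2j, 3 + 3j],
                 dtype=torch.complex128)

def lex_min(t):
    # Keep the entries whose real part is minimal, then pick the one
    # with the smallest imaginary part (NumPy's lexicographic rule).
    cand = t[t.real == t.real.min()]
    return cand[cand.imag.argmin()]

def lex_max(t):
    # Same idea with max: real part first, imaginary part as tie-breaker.
    cand = t[t.real == t.real.max()]
    return cand[cand.imag.argmax()]

print(lex_min(t))  # matches np.min(a): -2-2j
print(lex_max(t))  # matches np.max(a): 4+0j
```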
cc @ezyang @anjali411 @dylanbespalko