Returns float tensors for complex inputs to abs #35871

Closed

mruberry wants to merge 21 commits into master from disables_complex_abs
Conversation

@mruberry
Collaborator

@mruberry mruberry commented Apr 2, 2020

Per title. A test is added to test_type_promotion for the behavior. This behavior is consistent with NumPy's.

For complex inputs to abs, the result is cast to float after the computation, since abs must be computed on the original complex tensor. While std::abs returns a float value when called on complex inputs, returning a FloatTensor directly would require additional loop instantiations in TensorIterator. This may be worthwhile to pursue in the future.
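The NumPy behavior this change matches can be checked directly. A quick sketch (using NumPy rather than torch, since the promotion rule being matched is NumPy's):

```python
import numpy as np

# np.abs on a complex array computes sqrt(re^2 + im^2) elementwise
# and returns the corresponding real ("value") dtype.
z64 = np.array([3 + 4j, 0 - 1j], dtype=np.complex64)
z128 = np.array([3 + 4j], dtype=np.complex128)

print(np.abs(z64).dtype)   # float32: complex64 maps to float32
print(np.abs(z128).dtype)  # float64: complex128 maps to float64
print(np.abs(z64)[0])      # 5.0, the magnitude of 3+4j
```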

Resolves #33567

@mruberry mruberry added module: complex Related to complex number support in PyTorch module: numpy Related to numpy support, and also numpy compatibility of our operators labels Apr 2, 2020
@dr-ci

dr-ci Bot commented Apr 2, 2020

💊 Build failures summary and remediations

As of commit f188dd5 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker.

This comment has been revised 41 times.

@mruberry mruberry requested a review from apaszke as a code owner April 2, 2020 19:07
@mruberry
Collaborator Author

mruberry commented Apr 2, 2020

To satisfy the clang-format check I had to format shape_analysis.cpp. The relevant change is:

// Requirements:
//   dims           : preserved
//   scalar type    : preserved, except complex maps to float
//   device         : preserved
//   tensor inputs  : 1
//   tensor outputs : 1
// Additionally:
//   - First input should be the only tensor input
static const register_formula_for simple_unary_ops_complex_to_float{
    {
        "aten::abs(Tensor self) -> Tensor",
    },
    [](Node* node) -> type_vec_t {
      if (auto type = node->input(0)->type()->cast<TensorType>()) {
        const auto scalar_type = *(type->scalarType());
        if (isComplexType(scalar_type)) {
          const auto out_type = c10::toValueType(scalar_type);
          return type_vec_t{
              type->withScalarType(out_type)->dimensionedOnly()};
        }
      }
      return type_vec_t{};
    }};
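The scalar-type rule this formula encodes (complex maps to its value type via c10::toValueType, everything else is preserved) can be sketched in plain Python. The dict and function below are illustrative stand-ins, not PyTorch code:

```python
import numpy as np

# Hypothetical sketch of the complex -> value-type mapping that
# c10::toValueType performs; the dict is illustrative only.
TO_VALUE_TYPE = {
    np.dtype(np.complex64): np.dtype(np.float32),
    np.dtype(np.complex128): np.dtype(np.float64),
}

def propagate_abs_dtype(input_dtype):
    """Output dtype of abs: complex maps to its value type,
    all other dtypes are preserved."""
    d = np.dtype(input_dtype)
    return TO_VALUE_TYPE.get(d, d)

print(propagate_abs_dtype(np.complex64))   # float32
print(propagate_abs_dtype(np.float64))     # float64 (preserved)
print(propagate_abs_dtype(np.int32))       # int32 (preserved)
```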

@mruberry mruberry requested review from anjali411 and gchanan and removed request for apaszke April 2, 2020 19:24
@anjali411 anjali411 requested a review from ezyang April 2, 2020 20:01
@ezyang
Contributor

ezyang commented Apr 3, 2020

The clang-format output is a strict downgrade here. Let me talk to the people who added the clang-format check.

@anjali411
Contributor

looks good. Ideally we would want to avoid the extra copy at the end, but currently it seems cpu_kernel_vec also does a copy (iter.cast_outputs()) if the out dtype is not the same as the input dtype.
@mruberry I think we should also update

if dtype == torch.complex64:

and

# TODO: modify abs to return float/double for ComplexFloat/ComplexDouble
if diff.is_signed() and diff.dtype != torch.int8:
    diff = diff.abs()
    # if diff is complex, the imaginary component for diff will be 0
    # from the previous step, hence converting it to float and double is fine.
    if diff.dtype == torch.complex64:
        diff = diff.to(torch.float)
    elif diff.dtype == torch.complex128:
        diff = diff.to(torch.double)
:D
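The comment in that snippet can be sanity-checked: after abs, a complex diff has zero imaginary component, so casting down to the real value type loses nothing. A sketch with NumPy standing in for torch (the cast via `.real` is illustrative, not the actual test-suite code):

```python
import numpy as np

# Simulate the old behavior, where abs on a complex tensor returned a
# complex result whose imaginary part is exactly zero.
diff = np.array([3 + 4j, 0 - 1j], dtype=np.complex128)
old_style_abs = np.abs(diff).astype(np.complex128)  # magnitudes, imag == 0

assert np.all(old_style_abs.imag == 0)

# Because the imaginary part is exactly zero, converting to the real
# value type is lossless:
as_float = old_style_abs.real.astype(np.float64)
print(as_float)  # [5. 1.]
```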

Comment thread torch/testing/_internal/common_utils.py Outdated
Contributor

@anjali411 anjali411 left a comment


Thanks for working on this :)

Comment thread aten/src/ATen/native/UnaryOps.cpp
Comment thread aten/src/ATen/native/UnaryOps.cpp Outdated
Comment thread test/test_type_promotion.py
Comment thread aten/src/ATen/native/UnaryOps.cpp Outdated
Contributor

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mruberry merged this pull request in 3aeb2b1.

@mruberry
Collaborator Author

This was reverted since it appears to break a production model.

@mruberry mruberry reopened this Apr 11, 2020
@mruberry
Collaborator Author

The production-model break was due to an error in shape propagation. The shape propagation code is poorly tested by the OSS CI, unfortunately, and detecting errors in it via model breaks is our best defense. Following up with @houseroad about how to run the diff on more models to ensure the shape propagation logic is now correct.

Since we now know that shape propagation issues can pass CI and fail on actual models, the pattern @houseroad recommends should probably be mandatory for all shape prop changes.

@mruberry mruberry removed the merged label Apr 12, 2020
Contributor

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
Summary:
Per title. A test is added to test_type_promotion for the behavior. This behavior is consistent with NumPy's.

For complex inputs to `abs` the result is cast to float after the computation since the computation of abs must be performed on the original complex tensor. While `std::abs` returns a float value when called on complex inputs, returning a FloatTensor directly would require additional loop instantiations in TensorIterator. This may be worthwhile to pursue in the future.
Pull Request resolved: pytorch#35871

Differential Revision: D20961711

Pulled By: mruberry

fbshipit-source-id: 232f62cf64caa4154eb2194969efa51d2082d842
Contributor

@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry
Collaborator Author

Tested using @houseroad's run_all_model_execution_test.sh test. Thanks @houseroad!

@mruberry mruberry deleted the disables_complex_abs branch June 18, 2020 05:18
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Pull Request resolved: pytorch#35871

Differential Revision: D20961711

Pulled By: mruberry

fbshipit-source-id: 232f62cf64caa4154eb2194969efa51d2082d842
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Pull Request resolved: pytorch#35871

Differential Revision: D20984456

Pulled By: mruberry

fbshipit-source-id: 226445178f92f2b0292e92578656d98674a6aa20

Labels

Merged module: complex Related to complex number support in PyTorch module: numpy Related to numpy support, and also numpy compatibility of our operators

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Inconsistency between torch.abs() and np.absolute() for complex tensors

5 participants