Returns float tensors for complex inputs to abs#35871
To satisfy the clang-format check I had to format shape_analysis.cpp. The relevant change is:

The clang format is a strict downgrade. Let me talk to the people who added the clang format.
Looks good. Ideally we would want to be able to avoid the extra copy at the end, but currently it seems cpu_kernel_vec also does a copy.
anjali411
left a comment
Thanks for working on this :)
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
This was reverted since it appears to break a production model.
The break in the production model was due to an error in shape propagation. The shape propagation code is poorly tested by the OSS CI, unfortunately, and detecting errors in it via model breaks is our best defense. I'm following up with @houseroad about how to run the diff on more models to ensure the shape propagation logic is now correct. Since we now know that shape propagation issues can pass CI and still fail on actual models, the pattern @houseroad recommends should probably be mandatory for all shape prop changes.
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Per title. A test is added to test_type_promotion for the behavior. This behavior is consistent with NumPy's. For complex inputs to `abs` the result is cast to float after the computation since the computation of abs must be performed on the original complex tensor. While `std::abs` returns a float value when called on complex inputs, returning a FloatTensor directly would require additional loop instantiations in TensorIterator. This may be worthwhile to pursue in the future. Pull Request resolved: pytorch#35871 Differential Revision: D20961711 Pulled By: mruberry fbshipit-source-id: 232f62cf64caa4154eb2194969efa51d2082d842
facebook-github-bot
left a comment
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Tested using @houseroad's run_all_model_execution_test.sh test. Thanks @houseroad!
Summary: Per title. A test is added to test_type_promotion for the behavior. This behavior is consistent with NumPy's. For complex inputs to `abs` the result is cast to float after the computation since the computation of abs must be performed on the original complex tensor. While `std::abs` returns a float value when called on complex inputs, returning a FloatTensor directly would require additional loop instantiations in TensorIterator. This may be worthwhile to pursue in the future. Pull Request resolved: pytorch#35871 Differential Revision: D20984456 Pulled By: mruberry fbshipit-source-id: 226445178f92f2b0292e92578656d98674a6aa20
Per title. A test is added to test_type_promotion for the behavior. This behavior is consistent with NumPy's.
For complex inputs to `abs` the result is cast to float after the computation, since the computation of abs must be performed on the original complex tensor. While `std::abs` returns a float value when called on complex inputs, returning a FloatTensor directly would require additional loop instantiations in TensorIterator. This may be worthwhile to pursue in the future.

Resolves #33567
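As a quick illustration of the NumPy behavior this change matches (a standalone sketch using NumPy directly rather than PyTorch, so it makes no assumptions about the PR's own test code): taking `abs` of a complex array computes the magnitude on the complex values and returns a real-valued array.

```python
import numpy as np

# abs on a complex array computes the magnitude elementwise and the
# result dtype drops to the real counterpart: complex128 -> float64.
z = np.array([3 + 4j, 0 - 1j], dtype=np.complex128)
r = np.abs(z)
print(r.dtype)  # float64
print(r)        # [5. 1.]
```

The new `torch.abs` behavior for complex tensors mirrors this: the computation runs on the complex values, and the returned tensor has the corresponding float dtype.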