Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops #50465

imaginary-person wants to merge 10 commits into pytorch:master
Conversation
💊 CI failures summary: as of commit 52dcc72, ci.pytorch.org reports 1 failed job (details on the Dr. CI page).
All the errors are related to …
```cpp
// aten/src/ATen/native/TensorCompare.cpp (context from the diff)
            max_indices.device(), " for indices output");
  dim = maybe_wrap_dim(dim, self.dim());
  if (_dimreduce_return_trivial_no_ident(max, self, dim, keepdim, "max")) {
    TORCH_CHECK(!self.is_complex(), "max does not support complex inputs.");
```
It's worth noting that the dispatch handlers are invoked in the methods registered for min_stub & max_stub. However, the tests fail in some conditional cases such as this one, in which those stubs are never called. We can't rely upon dispatch macros to raise exceptions if they're never invoked in the first place! So, in these corner cases, we need explicit dtype checks.
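Below is a minimal Python sketch of the two paths being discussed, assuming a build where complex min/max is unsupported (as in this PR); the exact exception messages are an assumption and may vary:

```python
import torch

t = torch.tensor(1 + 1j, device='cpu', dtype=torch.complex128)

# Ordinary reduction: max_stub is reached, so the dispatch macro
# inside the registered kernel raises for the complex dtype.
try:
    torch.max(t.expand(2), dim=0)
except RuntimeError as e:
    print(e)  # error text produced by the dispatch macro

# Trivial reduction: a single-element tensor with dim= short-circuits
# before max_stub, so only the explicit TORCH_CHECK above can raise.
try:
    torch.max(t, dim=0)
except RuntimeError as e:
    print(e)  # error text produced by the dtype check
```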
Great, thanks for investigating!
Codecov Report

```
@@            Coverage Diff             @@
##           master   #50465      +/-   ##
==========================================
- Coverage   80.73%   80.73%   -0.01%
==========================================
  Files        1910     1910
  Lines      207182   207158      -24
==========================================
- Hits       167270   167244      -26
- Misses      39912    39914       +2
```
This failure is not due to the code in this PR, as Jenkins builds for other PRs are still failing due to the same cause.
The error messages returned for complex inputs to min/max ops should be informative. Added checks for clamp as well.
This is looking really good! Can you please also remove dtype checks for clamp in UnaryOps.cpp, and rely on dispatch throwing the errors? Dispatch is not enabled for complex currently, so as far as I can see no dispatch changes are necessary.
Thanks for your advice! I'll do that right away.
Removed dtype checks for complex types, as the dispatch macro used (AT_DISPATCH_ALL_TYPES_AND) doesn't allow complex types and errors out by default. So, the dispatch macro can be relied upon to raise an exception for unsupported complex types.
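As a rough user-level illustration (a sketch, not the PR's test code; the exception type and exact wording are assumptions that may vary by version), the error for a complex clamp input now comes from the dispatch macro's default branch:

```python
import torch

t = torch.ones(3, dtype=torch.complex64)
try:
    # No explicit dtype check remains here; the default branch of
    # AT_DISPATCH_ALL_TYPES_AND in the CPU kernel raises instead.
    torch.clamp_min(t, 0)
except Exception as e:
    # The tests in this PR only require one of these substrings.
    assert 'not support' in str(e) or 'not implemented' in str(e)
```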
ngimel left a comment:

Let's wait for CI, otherwise looks good.
Hello @ngimel, it seems that the Codecov/patch check failure is insignificant: it concerns a line of code that performs a dtype check for complex types when a particular condition is true, and no dispatch macro is invoked for that condition. The check failed because no test case exercises that condition, so that line was never visited at runtime.
This unrelated issue was fixed in #50518 |
facebook-github-bot left a comment:

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Hello @ngimel, please let me know if I should add a test pertaining to _aminmax().
Tests to ensure that users receive a runtime exception for _aminmax() currently not being supported for complex types. A dispatch macro handles all cases except the one in which the tensor has a single element and the dimension has been provided as an argument. In that particular case, a dtype check generates the exception; that dtype check was also added to TensorCompare.cpp in this PR.
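A sketch of the kind of test this commit describes (the test name here is hypothetical, and torch._aminmax is a private op whose signature may differ across versions):

```python
import unittest
import torch

class MyTestCase(unittest.TestCase):
    def test_aminmax_complex_unsupported(self):
        # Single-element tensor with an explicit dim: the one case the
        # dispatch macro never reaches, covered by the dtype check
        # added in TensorCompare.cpp.
        t = torch.tensor(1 + 1j, device='cpu', dtype=torch.complex128)
        with self.assertRaises(Exception):
            torch._aminmax(t, 0)

if __name__ == '__main__':
    unittest.main()
```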
Delete trailing spaces in comment
Remove whitespace from empty line
Remove extra whitespace
facebook-github-bot left a comment:

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Thanks for adding the test!

Thanks for the opportunity to contribute, @ngimel!

Internal failures are unrelated, I'm trying to land now. Thanks for the contribution!
Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops (pytorch#50465)

Summary: Fixes pytorch#50064.

Pull Request resolved: pytorch#50465
Reviewed By: mruberry
Differential Revision: D25938106
Pulled By: ngimel
fbshipit-source-id: 95e2df02ba8583fa3ce87d4a2fdcd60b912dda46
Fixes #50064
PROBLEM DESCRIPTION:

1. Dtype checks for complex types had not been removed in the previous PR for this issue (#50347). These type-checks were added in #36377 (Disables complex min and max), but are no longer necessary, as we now rely upon dispatch macros to produce error messages.
2. Dtype checks in clamp_max() and clamp_min() for complex inputs had not been removed either.
3. For min/max pointwise ops in TensorCompareKernel.cpp, complex dispatch had not been removed.

FIX DESCRIPTION:

FIX SUMMARY:

1. Removed the dtype checks added in #36377, and added 3 more in TensorCompare.cpp.
2. Removed dtype checks for complex inputs in clamp_max() and clamp_min().
3. Disabled complex dispatch for min/max pointwise ops in TensorCompareKernel.cpp.
4. Error messages in the exceptions raised for unimplemented min/max ops are now checked for containing the text "not support" (also a substring of "not supported") or "not implemented"; one of these must appear for the message to be considered informative.

REASON FOR NOT CHANGING DISPATCH FOR CUDA AND CLAMP OPS:
As for the CUDA min/max operations, their kernels are not compiled & dispatched for complex types anyway, so no further changes are required there; the dispatch macros currently in use simply have no cases for complex types. For example:

1. The reduce CUDA ops in ReduceMinMaxKernel.cu use AT_DISPATCH_ALL_TYPES_AND2, and that macro doesn't allow complex types.
2. In MaxMinElementwiseKernel.cu, the CUDA pointwise ops use AT_DISPATCH_FLOATING_TYPES_AND2 for non-integral & non-boolean types, and this macro doesn't have a case for complex types either.
3. The clamp CUDA ops use AT_DISPATCH_ALL_TYPES_AND2, which doesn't have a case for complex types. Similarly, the CPU clamp min/max ops use the AT_DISPATCH_ALL_TYPES_AND dispatch macro, which doesn't have a case for complex types.
REASON FOR ADDING 3 dtype CHECKS:

There are a few cases in which the methods registered for min_stub() or max_stub() are not called, so the dispatch macros never get invoked and no exception is raised. Hence, dtype checks are necessary at 3 places in aten/src/ATen/native/TensorCompare.cpp (at commit 52dcc72):

1. Line 342
2. Line 422
3. Line 389
The first dtype check requirement (line 342) can be verified with the following example Python code, based on test_complex_unsupported():
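```python
import unittest
import torch

class MyTestCase(unittest.TestCase):
    def test_1(self):
        t = torch.tensor((1 + 1j), device='cpu', dtype=torch.complex128)
        with self.assertRaises(Exception):
            torch.max(t, dim=0)

if __name__ == '__main__':
    unittest.main()
```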