Improve complex support in common_nn test machinery #50593
Closed
peterbell10 wants to merge 8 commits into gh/peterbell10/41/base from
Conversation
There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors. Also found a few places that explicitly cast inputs to floating point types, which would drop the imaginary component before running the test. [ghstack-poisoned]
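The failure mode can be sketched in plain Python (the dictionary, dtype strings, and helper names below are illustrative stand-ins, not PyTorch's actual implementation): a CPU/GPU mapping keyed on the legacy per-backend type names has no entries for complex dtypes, because names like a complex `FloatTensor` were never defined.

```python
# Illustrative sketch (not PyTorch's real code): legacy tensor type names
# exist only for real dtypes, so any lookup based on them fails for complex.
LEGACY_TYPE_NAMES = {
    ("cpu", "float32"): "torch.FloatTensor",
    ("cuda", "float32"): "torch.cuda.FloatTensor",
    ("cpu", "float64"): "torch.DoubleTensor",
    ("cuda", "float64"): "torch.cuda.DoubleTensor",
    # no ("cpu", "complex64") or ("cuda", "complex128") entries exist
}

def get_legacy_type_name(device, dtype):
    """Look up the legacy tensor type name; breaks for complex dtypes."""
    try:
        return LEGACY_TYPE_NAMES[(device, dtype)]
    except KeyError:
        raise TypeError(f"no legacy tensor type for {dtype} on {device}")
```

The fix in the spirit of this PR is to describe conversions by (device, dtype) pairs directly rather than by legacy type name, so complex tensors take the same path as real ones.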
Contributor
We had a similar discussion in #49912, which added complex support for the L1 loss function.
anjali411
reviewed
Jan 15, 2021
    else:
        return tensor

def to_half(x):
Contributor
I don't think we should add to_half for complex tensors, since the support for torch.complex32 is bare minimum right now. In fact, @mruberry and I both agreed we should probably disable torch.complex32 for 1.8 release.
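A minimal sketch of that suggestion (the helper name and string-based dtypes here are hypothetical, chosen so the sketch runs without torch): skip the half-precision cast when the input is complex, since torch.complex32 support is minimal.

```python
COMPLEX_DTYPES = {"complex64", "complex128"}

def to_half(x, dtype):
    # Hypothetical helper: leave complex inputs at their original precision
    # instead of casting to the barely supported torch.complex32.
    if dtype in COMPLEX_DTYPES:
        return x, dtype
    return x, "float16"  # stand-in for x.half()
```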
Collaborator
Author
Rebased on #49912 after it was merged and removed the reference to
mruberry
approved these changes
Jan 19, 2021
Collaborator
mruberry
left a comment
Looks OK to me. Let's let @anjali411 take a look, too.
peterbell10
pushed a commit
to peterbell10/pytorch
that referenced
this pull request
Jan 21, 2021
ghstack-source-id: 9e8b117 Pull Request resolved: pytorch#50593
peterbell10
commented
Jan 21, 2021
  input = input + (kwargs['target_fn'](),)

- args_variable, kwargs_variable = create_input(input)
+ args_variable, kwargs_variable = create_input(input, dtype=input_dtype)
Collaborator
Author
@anjali411 this should fix the test failure in #50594.
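The change in the diff above can be sketched as follows (a simplified stand-in for the real `create_input` helper, using string dtypes so the sketch runs without torch): the test's requested dtype is threaded through instead of hard-coding a float cast, so a complex input's imaginary component survives.

```python
def create_input(values, dtype=None):
    # Simplified stand-in: honor the requested dtype rather than always
    # casting to float, which silently drops the imaginary part.
    if dtype is None or dtype.startswith("float"):
        # Python's complex(v).real models the lossy real-only cast.
        return [complex(v).real for v in values]
    if dtype.startswith("complex"):
        return [complex(v) for v in values]
    raise ValueError(f"unsupported dtype: {dtype}")
```

With `dtype=None` (the old behavior), a complex value like `1+2j` comes back as `1.0`; passing the complex dtype through keeps it intact.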
Contributor
laurentdupin
pushed a commit
to laurentdupin/pytorch
that referenced
this pull request
Apr 24, 2026
Summary: Pull Request resolved: pytorch#50593

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors. Also found a few places that explicitly cast inputs to floating point types, which would drop the imaginary component before running the test.

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D25954050

Pulled By: mruberry

fbshipit-source-id: 1fa8e5af233aa095c839d5e2f860564baaf92aef
Stack from ghstack:

There is no equivalent of torch.FloatTensor or torch.cuda.FloatTensor for complex types, so `get_gpu_type` and `get_cpu_type` are broken for complex tensors. Also found a few places that explicitly cast inputs to floating point types, which would drop the imaginary component before running the test.

Differential Revision: D25954050