[complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select#51281
Conversation
💊 CI failures summary: as of commit 8ac029d, ci.pytorch.org reported 1 failed job (details on the Dr. CI page).
Codecov Report

@@            Coverage Diff             @@
##           master   #51281      +/-   ##
==========================================
- Coverage   80.84%   80.84%   -0.01%
==========================================
  Files        1931     1931
  Lines      210884   210890       +6
==========================================
- Hits       170496   170493       -3
- Misses      40388    40397       +9
sample_inputs_func=sample_inputs_masked_scatter,
skips=(
    # _th_masked_fill_bool_ not supported for Complex Types.
    SkipInfo('TestGradients', 'test_fn_grad',
You could instead just set test_complex_grad=False.
Actually, masked_fill_ is already ported for CPU and works there, so I thought it would be better to disable just the CUDA variant instead of both.
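To illustrate the suggestion being discussed, restricting the skip to the CUDA variant might look like the sketch below. This is a stand-in, not the real torch.testing code: the actual SkipInfo lives in torch.testing._internal.common_methods_invocations, and the field names here are assumptions modeled on it.

```python
from collections import namedtuple

# Stand-in record for the OpInfo skip mechanism; the field names are
# assumptions modeled on torch.testing's SkipInfo, not the real class.
SkipInfo = namedtuple("SkipInfo", "cls_name test_name device_type dtypes")

# Skip the complex-gradient test only on CUDA, since masked_fill_ is
# already ported (and working) on CPU.
cuda_only_skip = SkipInfo('TestGradients', 'test_fn_grad',
                          device_type='cuda',
                          dtypes=['complex64', 'complex128'])
print(cuda_only_skip.device_type)  # 'cuda'
```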
'reflection_pad1d_backward', 'reflection_pad2d_backward',
'replication_pad1d', 'replication_pad2d', 'replication_pad3d',
'replication_pad1d_backward', 'replication_pad2d_backward', 'replication_pad3d_backward',
'masked_scatter', 'masked_select'
We shouldn't enable masked_scatter here, since it depends on _th_masked_fill_bool_, which is not supported for complex.
# TODO: update test when masked_scatter is supported for complex
# and cpu supports half
if (dt == torch.half and self.device_type == 'cpu') or dt.is_complex:
You probably still want to retain this for the (dt == torch.half and self.device_type == 'cpu') case, right?
Oh, never mind, you also enable masked_scatter for torch.half on CPU.
Oh, I forgot to add it to the description, but I have also added support for half on CPU.
Updated the description.
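For context on what the new half/complex coverage exercises: masked_scatter_ fills the positions of self where mask is True with consecutive elements taken from source. The rule is element-wise and dtype-agnostic, which is why extending it to half and complex is mostly a kernel/dispatch change. A minimal pure-Python sketch of those semantics (flat lists standing in for tensors, not the actual ATen kernel):

```python
def masked_scatter(data, mask, source):
    # Copy consecutive elements of `source` into the True positions of
    # `mask`; untouched positions keep their original value. Mirrors the
    # flattened semantics of Tensor.masked_scatter_ (a sketch only).
    src = iter(source)
    return [next(src) if m else x for x, m in zip(data, mask)]

# Works identically for complex values, since nothing here is dtype-specific.
print(masked_scatter([1 + 0j, 2 + 0j, 3 + 0j],
                     [True, False, True],
                     [10 + 1j, 20 + 2j]))
# [(10+1j), (2+0j), (20+2j)]
```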
facebook-github-bot
left a comment
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in a88e1d3.
… masked_scatter and masked_select (pytorch#51281)

Summary:
Reference: pytorch#33152

Changes
* Enable complex support for masked_scatter
* Enable half support for masked_scatter CPU
* Enable complex autograd support for masked_scatter CPU and masked_select (both CPU and CUDA)

Note: Complex support for masked_scatter CUDA is disabled as it depends on masked_fill, which is yet to be ported to ATen.

Pull Request resolved: pytorch#51281
Reviewed By: ailzhang
Differential Revision: D26127561
Pulled By: anjali411
fbshipit-source-id: 6284926b934942213c5dfc24b5bcc8538d0231af
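The masked_select autograd change mentioned above follows the usual gather/scatter pairing: the forward gathers the elements where the mask is True, and the backward scatters the incoming gradient back to those positions with zeros elsewhere, a rule that works unchanged for complex dtypes. A pure-Python sketch of that pairing (flat lists standing in for tensors; this is an illustration of the autograd rule, not the actual ATen implementation):

```python
def masked_select(data, mask):
    # Forward: gather the elements of `data` where `mask` is True,
    # producing a flat result (like torch.masked_select).
    return [x for x, m in zip(data, mask) if m]

def masked_select_backward(grad_output, mask):
    # Backward: scatter the incoming gradient back to the True positions;
    # unselected positions receive zero gradient.
    grads = iter(grad_output)
    return [next(grads) if m else 0 for m in mask]

print(masked_select([1 + 2j, 3 + 0j, 5 - 1j, 7 + 0j],
                    [True, False, True, False]))
# [(1+2j), (5-1j)]
print(masked_select_backward([1, 1], [True, False, True, False]))
# [1, 0, 1, 0]
```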