[complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select#51281

Closed
kshitij12345 wants to merge 2 commits into pytorch:master from kshitij12345:support/complex/masked_scatter-masked_select

Conversation


kshitij12345 (Collaborator) commented Jan 28, 2021

Reference: #33152

Changes

  • Enable complex support for masked_scatter
  • Enable half support for masked_scatter CPU
  • Enable complex autograd support for masked_scatter CPU and masked_select (both CPU and CUDA).

Note:
Complex Support for masked_scatter CUDA is disabled as it depends on masked_fill which is yet to be ported to ATen.
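For illustration, the enabled behavior can be sketched as follows. This is a minimal sketch assuming a PyTorch build that contains this change; as noted, the CUDA complex path for masked_scatter stays disabled.

```python
import torch

# Sketch of the newly enabled paths (CPU, complex dtype).
mask = torch.tensor([True, False, True, False])

# masked_select on a complex tensor, with autograd.
x = torch.randn(4, dtype=torch.cdouble, requires_grad=True)
selected = x.masked_select(mask)  # gathers elements where mask is True

# Complex autograd: reduce to a real scalar before calling backward().
selected.abs().sum().backward()

# masked_scatter on a complex tensor: writes src into masked positions.
src = torch.tensor([1 + 1j, 2 + 2j], dtype=torch.cdouble)
out = torch.zeros(4, dtype=torch.cdouble).masked_scatter(mask, src)
# out: [1+1j, 0, 2+2j, 0]
```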

facebook-github-bot commented Jan 28, 2021

💊 CI failures summary and remediations

As of commit 8ac029d (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

kshitij12345 marked this pull request as ready for review on January 28, 2021 at 15:08

codecov Bot commented Jan 28, 2021

Codecov Report

Merging #51281 (8ac029d) into master (392abde) will decrease coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #51281      +/-   ##
==========================================
- Coverage   80.84%   80.84%   -0.01%     
==========================================
  Files        1931     1931              
  Lines      210884   210890       +6     
==========================================
- Hits       170496   170493       -3     
- Misses      40388    40397       +9     

sample_inputs_func=sample_inputs_masked_scatter,
skips=(
# _th_masked_fill_bool_ not supported for Complex Types.
SkipInfo('TestGradients', 'test_fn_grad',
Contributor:

you could instead just set test_complex_grad=False

Collaborator (Author):

Actually, masked_fill_ is ported for CPU and it works.

So I thought it would be better to just disable the CUDA variant, instead of disabling both of them.

Contributor:

ohh I see

'reflection_pad1d_backward', 'reflection_pad2d_backward',
'replication_pad1d', 'replication_pad2d', 'replication_pad3d',
'replication_pad1d_backward', 'replication_pad2d_backward', 'replication_pad3d_backward',
'masked_scatter', 'masked_select'
Contributor:

we shouldn't enable masked_scatter since it's dependent on _th_masked_fill_bool_ which is not supported for complex

Comment thread: test/test_torch.py

# TODO: update test when masked scatter is supported for complex
# and cpu supports half
if (dt == torch.half and self.device_type == 'cpu') or dt.is_complex:
Contributor:

you probably still wanna retain this for if (dt == torch.half and self.device_type == 'cpu') right?

Contributor:

oh nvm you also enable masked_scatter for torch.half on CPU

Collaborator (Author):

Oh, I forgot to add it in the description, but I have also added support for half on CPU.

Collaborator (Author):

Updated the description.
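The half-on-CPU path discussed above can be sketched as follows (a minimal sketch, assuming a PyTorch build that contains this change; previously masked_scatter on float16 was unsupported on CPU):

```python
import torch

# masked_scatter_ on a float16 CPU tensor, which this PR enables.
dest = torch.zeros(4, dtype=torch.half)
mask = torch.tensor([True, False, False, True])
src = torch.tensor([1.5, 2.5], dtype=torch.half)  # exactly representable in half
dest.masked_scatter_(mask, src)
# dest: [1.5, 0.0, 0.0, 2.5]
```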

anjali411 (Contributor) left a comment:

Looks good to me after masked_scatter backward is disabled for complex. Edit: masked_fill_ is ported on CPU, so the backward works only on CPU, so it's fine to enable complex autograd.
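The complex autograd path under discussion can be checked numerically with gradcheck (a sketch, assuming a build with this change; gradcheck wants double-precision inputs, hence cdouble):

```python
import torch

# Numerically verify the complex backward of masked_select (CPU):
# gradcheck compares analytical and numerical Jacobians.
mask = torch.tensor([True, False, True])
x = torch.randn(3, dtype=torch.cdouble, requires_grad=True)
ok = torch.autograd.gradcheck(lambda t: t.masked_select(mask), (x,))
```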

facebook-github-bot (Contributor) left a comment:

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.


@anjali411 merged this pull request in a88e1d3.

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
… masked_scatter and masked_select (pytorch#51281)

Summary:
Reference: pytorch#33152


Pull Request resolved: pytorch#51281

Reviewed By: ailzhang

Differential Revision: D26127561

Pulled By: anjali411

fbshipit-source-id: 6284926b934942213c5dfc24b5bcc8538d0231af