support complex types for tanh_backward_cpu #37791

Closed
kshitij12345 wants to merge 3 commits into pytorch:master from kshitij12345:develop/complex/tanh_backward

Conversation

@kshitij12345
Collaborator

@kshitij12345 kshitij12345 commented May 4, 2020

Closes: #37701

TO-DO:

  • Add Tests

@kshitij12345 kshitij12345 force-pushed the develop/complex/tanh_backward branch from d343a30 to 9125f0c Compare May 4, 2020 21:40
@kshitij12345
Collaborator Author

@anjali411
Please review.
Also, is there another operator that has a complex dtype test enabled in test_autograd.py which I could use for reference?

@anjali411
Contributor

@kshitij12345 this looks good! I am working on adding a whitelist for testing backward for some C -> C functions (will have a PR soon). Once that's pushed, you will be able to enable the test for the complex dtype simply by adding tanh to the whitelist.

I should have the PR up by tomorrow.
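The whitelist mechanism being described can be pictured roughly like this (a hypothetical sketch: the names `complex_list` and `maybe_test_complex_backward` are illustrative, not the actual test_autograd.py API):

```python
import cmath

# Hypothetical whitelist of C -> C functions whose backward is tested
# with complex inputs; adding 'tanh' is what enables the complex test.
complex_list = ['exp', 'log']

def maybe_test_complex_backward(op_name, fn, z):
    """Run a finite-difference derivative check only for whitelisted ops."""
    if op_name not in complex_list:
        return None  # complex backward test skipped for this op
    eps = 1e-6
    # Central-difference approximation of the complex derivative
    # (valid along the real axis for a holomorphic function).
    return (fn(z + eps) - fn(z - eps)) / (2 * eps)

# Before adding 'tanh' to the whitelist, nothing runs:
assert maybe_test_complex_backward('tanh', cmath.tanh, 0.3 + 0.2j) is None

# After adding it, the derivative check runs and matches 1 - tanh(z)**2:
complex_list.append('tanh')
z = 0.3 + 0.2j
numeric = maybe_test_complex_backward('tanh', cmath.tanh, z)
analytic = 1 - cmath.tanh(z) ** 2
assert abs(numeric - analytic) < 1e-5
```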

@kshitij12345
Collaborator Author

@anjali411 Sounds good.

@ngimel ngimel requested a review from anjali411 May 5, 2020 00:17
@ngimel ngimel added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 5, 2020
@anjali411 anjali411 added the module: complex Related to complex number support in PyTorch label May 5, 2020
@anjali411
Contributor

Hey @kshitij12345, you can add tanh to this whitelist, and then this test should run with complex as well.

@kshitij12345
Collaborator Author

kshitij12345 commented May 8, 2020

@anjali411
Is there any simple way to disable it for the case when the device is CUDA and the dtype is complex, since tanh forward itself doesn't support the CUDA device?

Or should we add another complex_cpu list and update the test generation logic?

I would take care of tanh_cuda and tanh_backward_cuda in another PR.

@anjali411
Contributor

@anjali411
Is there any simple way to disable it for the case when the device is CUDA and the dtype is complex, since tanh forward itself doesn't support the CUDA device?

Or should we add another complex_cpu list and update the test generation logic?

I would take care of tanh_cuda and tanh_backward_cuda in another PR.

Generally, we could add "ExpectedCudaFailure" (used here) if it were not working for floating and complex dtypes, and we might need something like that later. For now, though, I think we can just add a new test for complex, tanh_complex, add it to the whitelist, and mark it with "ExpectedCudaFailure". That test would then fail once we add tanh_backward for CUDA, which would remind us to update it.
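The "expected failure that starts passing" idea can be illustrated with plain unittest (a sketch only; PyTorch's device-generic test framework has its own decorators, which this does not reproduce, and `CUDA_TANH_COMPLEX_SUPPORTED` is a made-up flag):

```python
import unittest

CUDA_TANH_COMPLEX_SUPPORTED = False  # flip to True once the CUDA kernel lands

class TestTanhComplex(unittest.TestCase):
    @unittest.expectedFailure
    def test_tanh_backward_complex_cuda(self):
        # Marked as an expected failure: today this raises because the
        # CUDA kernel is missing. Once support is added, the test starts
        # passing "unexpectedly", reminding us to drop the decorator.
        if not CUDA_TANH_COMPLEX_SUPPORTED:
            raise RuntimeError("tanh_backward not implemented for complex on CUDA")

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestTanhComplex))
assert result.wasSuccessful()  # the expected failure counts as success
```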

@kshitij12345 kshitij12345 force-pushed the develop/complex/tanh_backward branch from 9125f0c to 9961106 Compare May 11, 2020 13:34
Comment thread on test/test_autograd.py (outdated)
@anjali411
Contributor

Also, would you like to work on adding cuda support for complex for a few other ops as well?

@kshitij12345
Collaborator Author

Also, would you like to work on adding cuda support for complex for a few other ops as well?

Sure would. :)

@anjali411
Contributor

Also, would you like to work on adding cuda support for complex for a few other ops as well?

Sure would. :)

Great! Let's follow up on this on Slack.

@kshitij12345 kshitij12345 force-pushed the develop/complex/tanh_backward branch from 9961106 to 5fdac7a Compare May 14, 2020 16:16
@dr-ci

dr-ci Bot commented May 14, 2020

💊 CI failures summary and remediations

As of commit cd2d446 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@kshitij12345
Collaborator Author

kshitij12345 commented May 14, 2020

@anjali411 PTAL :)

great! let's follow up on this on slack

I applied for a Slack invite, but sadly haven't received a response.

@kshitij12345
Collaborator Author

@anjali411 Gentle ping.

});
});
if (isComplexType(iter.dtype())) {
AT_DISPATCH_COMPLEX_TYPES(iter.dtype(), "tanh_backward_cpu", [&]() {
Contributor


We could combine these two dispatches in the future, once AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES starts using c10::complex.
cc @zasdfgbnm
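The scalar computation inside the dispatched kernel is the same for real and complex types: given the upstream gradient and the saved forward result tanh(z), the backward is grad * (1 - tanh(z)^2). A quick numeric sanity check in Python (an illustrative mirror of the per-element math, not the ATen kernel itself; note that PyTorch's complex autograd conventions have evolved since this PR, and this shows only the holomorphic derivative):

```python
import cmath

def tanh_backward(grad_output, output):
    """Mirror of the per-element kernel math: given grad w.r.t. tanh(z)
    and the saved forward result output = tanh(z), return grad w.r.t. z
    via the chain rule."""
    return grad_output * (1 - output * output)

z = 0.5 - 0.4j
out = cmath.tanh(z)
g = 1 + 0j  # upstream gradient

# Chain-rule check against a central finite difference of tanh at z:
eps = 1e-6
numeric = (cmath.tanh(z + eps) - cmath.tanh(z - eps)) / (2 * eps)
assert abs(tanh_backward(g, out) - g * numeric) < 1e-5
```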

Contributor

@facebook-github-bot facebook-github-bot left a comment


@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in 09c430a.

facebook-github-bot pushed a commit that referenced this pull request May 20, 2020
Summary:
Builds on #37791
Pull Request resolved: #38786

Differential Revision: D21666138

Pulled By: anjali411

fbshipit-source-id: cbd313b8fd21109aadd614c60259b9dc505771a5
@kshitij12345 kshitij12345 deleted the develop/complex/tanh_backward branch May 27, 2020 15:30
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Closes: pytorch#37701

TO-DO:
* [x] Add Tests
Pull Request resolved: pytorch#37791

Differential Revision: D21619827

Pulled By: anjali411

fbshipit-source-id: 0919ec80168a7f8b8092da8d39b8bc6f519d3440
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…8786)

Summary:
Builds on pytorch#37791
Pull Request resolved: pytorch#38786

Differential Revision: D21666138

Pulled By: anjali411

fbshipit-source-id: cbd313b8fd21109aadd614c60259b9dc505771a5

Labels

Merged module: complex Related to complex number support in PyTorch open source triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Development

Successfully merging this pull request may close these issues.

Implement tanh_backward for complex dtypes

6 participants