
Fix some tensor operators to return NotImplemented for invalid inputs#57934

Closed
asi1024 wants to merge 3 commits into pytorch:master from asi1024:fix-operator

Conversation

@asi1024
Contributor

@asi1024 asi1024 commented May 10, 2021

Fixes #57719.

This PR fixes torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__} to return NotImplemented instead of raising a TypeError.

cc/ @mruberry: The first commit of this PR is the same as 1d209db except for the commit message.
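For context, returning `NotImplemented` is what lets Python's binary-operator protocol fall back to the other operand's reflected method instead of failing immediately. A minimal sketch with hypothetical classes (not PyTorch code):

```python
class Tensorish:
    """Stands in for torch.Tensor; hypothetical, not PyTorch code."""
    def __init__(self, value):
        self.value = value

    def __sub__(self, other):
        if not isinstance(other, (Tensorish, int, float)):
            # Returning NotImplemented (instead of raising TypeError)
            # lets Python fall back to type(other).__rsub__.
            return NotImplemented
        rhs = other.value if isinstance(other, Tensorish) else other
        return Tensorish(self.value - rhs)

class Duck:
    """A third-party array-like that knows how to interoperate."""
    def __init__(self, value):
        self.value = value

    def __rsub__(self, other):
        # Reached only because Tensorish.__sub__ declined the operands.
        return Duck(other.value - self.value)

out = Tensorish(5) - Duck(2)
print(type(out).__name__, out.value)  # Duck 3
```

If `__sub__` raised `TypeError` instead, `Duck.__rsub__` would never get a chance to run.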

@facebook-github-bot
Contributor

facebook-github-bot commented May 10, 2021

💊 CI failures summary and remediations

As of commit 3069f58 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚


This comment was automatically generated by Dr. CI.

Collaborator

@albanD albanD left a comment


Thanks for the PR, it looks quite good.
I am not sure what test_binary_ufuncs is testing for these. In particular, does it properly cover all the reverse cases and make sure they return the right value?

```cpp
{"__nonzero__", THPVariable_bool_scalar, METH_NOARGS, NULL},
{"__invert__", THPVariable_invert, METH_NOARGS, NULL},
{"__matmul__", castPyCFunctionWithKeywords(TypeError_to_NotImplemented_<THPVariable_matmul>), METH_VARARGS | METH_KEYWORDS, NULL},
{"__rmatmul__", castPyCFunctionWithKeywords(TypeError_to_NotImplemented_<THPVariable_matmul>), METH_VARARGS | METH_KEYWORDS, NULL},
```
Collaborator


Is THPVariable_matmul actually handling the reverse version properly?
In particular, when two valid inputs are given to it, it will not throw an error, but it will compute the product with the operands in the forward order rather than the reversed one.
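The concern can be illustrated with a pure-Python sketch (hypothetical classes, not the PyTorch internals): for `a @ b`, `b.__rmatmul__(a)` must keep `a` on the left-hand side, so a reverse method cannot simply reuse the forward implementation with its own argument order.

```python
class Mat:
    """Hypothetical matrix: tracks only its shape for illustration."""
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols

    def __matmul__(self, other):
        if not isinstance(other, Mat):
            # Decline so Python can try type(other).__rmatmul__.
            return NotImplemented
        return Mat(self.rows, other.cols)

class NamedMat:
    """A non-Mat operand that implements the reflected method."""
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols

    def __rmatmul__(self, other):
        # other @ self: "other" stays on the left, so the result takes
        # its row count from other and its column count from self.
        return Mat(other.rows, self.cols)

a = Mat(2, 3)
b = NamedMat(3, 4)
out = a @ b  # Mat.__matmul__ declines -> NamedMat.__rmatmul__ runs
print(out.rows, out.cols)  # 2 4
```

Swapping the operands here (computing `self @ other` inside `__rmatmul__`) would produce a shape mismatch or a silently wrong result.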

@bdhirsh added the `triaged` label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) on May 10, 2021
@codecov

codecov Bot commented May 10, 2021

Codecov Report

Merging #57934 (3069f58) into master (259d19a) will increase coverage by 0.01%.
The diff coverage is 84.61%.

```
@@            Coverage Diff             @@
##           master   #57934      +/-   ##
==========================================
+ Coverage   76.82%   76.84%   +0.01%
==========================================
  Files        1986     1986
  Lines      197448   197459      +11
==========================================
+ Hits       151693   151735      +42
+ Misses      45755    45724      -31
```

@asi1024
Contributor Author

asi1024 commented May 10, 2021

@albanD Thank you for your review!
I implemented the __rmatmul__ method in _tensor.py to handle the reverse version properly, and also added a test! Could you take another look? 😃

@asi1024
Contributor Author

asi1024 commented May 10, 2021

BTW, when I created this PR, I was under the misapprehension that __rmatmul__ was already supported and only needed fixing to return NotImplemented. Adding __rmatmul__ support goes beyond what the title of this PR describes. Should I separate it into another PR?

@albanD
Collaborator

albanD commented May 10, 2021

Thanks for the update.
I think you can leave it here as it is not too big.

Could you make sure that the CI is green? In particular, flake8 is not happy with the assert False; you can replace it with raise RuntimeError(...).
Also, it looks like your rmatmul implementation goes into an infinite recursion.
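A hypothetical illustration of the recursion trap (not the actual PR code): if `__rmatmul__` is written as `other @ self` and the other operand's `__matmul__` returns `NotImplemented`, the `@` dispatch bounces straight back into `__rmatmul__`.

```python
class Declines:
    """Operand whose __matmul__ always declines the operands."""
    def __matmul__(self, other):
        return NotImplemented

class BadRMatmul:
    def __rmatmul__(self, other):
        # "other @ self" restarts @ dispatch: other.__matmul__ declines
        # again, so Python calls this very method again -- infinite loop.
        return other @ self

class GoodRMatmul:
    def __rmatmul__(self, other):
        # Call into an implementation directly instead of round-tripping
        # through the @ operator.
        return ("matmul", other, self)

try:
    Declines() @ BadRMatmul()
    recursed = False
except RecursionError:
    recursed = True

result = Declines() @ GoodRMatmul()
print(recursed, result[0])  # True matmul
```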

@mruberry
Collaborator

I suggest just updating those that are easiest for the tests to pass and leaving a comment in the code about the remainder. This is a nice code cleanup, but few users will appreciate the change.

Collaborator

@albanD albanD left a comment


LGTM thanks for the tests

@facebook-github-bot
Contributor

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@mruberry
Collaborator

Thanks @asi1024!

@asi1024
Contributor Author

asi1024 commented May 11, 2021

Thanks! 😄

@facebook-github-bot
Contributor

@albanD merged this pull request in 35521a2.

@albanD
Collaborator

albanD commented May 12, 2021

I'm afraid this broke some master-only CI builds: https://app.circleci.com/pipelines/github/pytorch/pytorch/320932/workflows/2a61be9b-ecad-475b-bb9a-0c03b6c97d71/jobs/13402438

I think bfloat16 should be added to the cuda types.

Could you send a new PR with the same content as this one + the bfloat16 dtype on cuda?

(the bot is going to mark this one reverted very soon)
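A sketch of the kind of fix being requested, with illustrative names and dtype strings rather than the actual test code: when a test enumerates dtypes per device, the CUDA list needs to include bfloat16 even where the CPU list does not.

```python
# Illustrative only: names and dtype strings are hypothetical, not the
# real PyTorch test helpers. In real code these would be torch dtypes.
CPU_FLOAT_DTYPES = ("float16", "float32", "float64")

# CUDA kernels also cover bfloat16; the master-only CI failure above
# suggests this entry was missing from the CUDA list.
CUDA_FLOAT_DTYPES = CPU_FLOAT_DTYPES + ("bfloat16",)

def float_dtypes_for(device):
    """Return the floating dtypes a test should exercise on `device`."""
    return CUDA_FLOAT_DTYPES if device == "cuda" else CPU_FLOAT_DTYPES

print("bfloat16" in float_dtypes_for("cuda"))  # True
print("bfloat16" in float_dtypes_for("cpu"))   # False
```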

@facebook-github-bot
Contributor

This pull request has been reverted by 5e83c62.

@asi1024
Contributor Author

asi1024 commented May 13, 2021

@albanD I opened a new PR as #58216!

krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
…ts (pytorch#57934)

Summary:
Fixes pytorch#57719.

This PR fixes `torch.Tensor{__rsub__, __rdiv__, __rtruediv__, __pow__, __rmatmul__}` to return `NotImplemented` instead of raising a `TypeError`.

cc/ mruberry: The first commit of this PR is the same as pytorch@1d209db except for the commit message.

Pull Request resolved: pytorch#57934

Reviewed By: mruberry

Differential Revision: D28351931

Pulled By: albanD

fbshipit-source-id: 985457a44dba24d2496794dfb8c1661cbcd4ff8f
facebook-github-bot pushed a commit that referenced this pull request May 19, 2021
…ts (#58216)

Summary:
Same as #57934. (cc/ albanD)

Pull Request resolved: #58216

Reviewed By: ailzhang

Differential Revision: D28494886

Pulled By: albanD

fbshipit-source-id: 380205867ee1cde90e1c6fcfe2a31749e1243530

Labels

cla signed · Merged · open source · Reverted · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Some torch.Tensor operators don’t return NotImplemented for invalid inputs

6 participants