Optimize implementation of torch.pow #46830
Conversation
💊 CI failures summary: as of commit dfe8670175, there are no failures. (This comment was automatically generated by Dr. CI.)
Adding @zou3519, as this PR adds named tensor support to the operator.
facebook-github-bot
left a comment
@ezyang has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 I just realized that this PR intersects with your fix for the accidental complex promotion. Do you think you could work these changes into your patchset in some way? Otherwise I'll wait for your fix to land and rebase this on top of yours.
I rebased this PR since it has been a while. By the way, if there is anything else I can do for this PR, I'd be glad to help.
@ezyang sorry, I completely missed this PR. Let me merge my
Hi @Kiyosora, can you rebase this PR on the latest master?
Sure, I've rebased this PR on the latest master. @anjali411
@Kiyosora can you please retain this check as `exp.equal(0.0)` (as in master)?
This is no longer needed. Let's remove it!
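The review exchange above concerns a special-case check on a scalar exponent. As a hedged illustration (not the actual ATen implementation, which dispatches to optimized C++ kernels), a `pow`-style operator commonly short-circuits a few scalar exponents before falling back to the general elementwise power; the function name and list-based "tensor" below are hypothetical stand-ins:

```python
def pow_scalar_exponent(base, exp):
    """Hypothetical sketch of scalar-exponent fast paths in a pow-style op.

    `base` is a list of floats standing in for a tensor; `exp` is a Python
    scalar. Only the special-case dispatch logic (e.g. the exp == 0.0 check
    discussed above) is illustrated here.
    """
    if exp == 0.0:
        # x ** 0 == 1 for every element, so skip the elementwise power entirely.
        return [1.0 for _ in base]
    if exp == 1.0:
        # Identity: return a copy of the input.
        return list(base)
    if exp == 2.0:
        # Square via a multiply, which is cheaper than a general pow.
        return [x * x for x in base]
    # General fallback for arbitrary exponents.
    return [x ** exp for x in base]
```

The zero-exponent branch is why a check like `exp.equal(0.0)` matters: it lets the kernel fill the output with ones without reading the base at all.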
facebook-github-bot
left a comment
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in d140ca8.
Summary:
- Related to pytorch#44937
- Use `resize_output` instead of `resize_as`
- Tune `native_functions.yaml`: move the in-place variant `pow_` next to the other `pow` entries

Pull Request resolved: pytorch#46830
Reviewed By: mrshenli
Differential Revision: D24567702
Pulled By: anjali411
fbshipit-source-id: a352422c9d4e356574dbfdf21fb57f7ca7c6075d
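The summary's switch from `resize_as` to `resize_output` changes when the `out` tensor is resized: rather than unconditionally resizing `out` to match another tensor, the output is resized only when its shape differs from the result shape, with a warning if a non-empty output had the wrong shape. A minimal pure-Python sketch of that pattern, assuming a dict with `'shape'` and `'data'` keys as a stand-in for a tensor (the helper names here are hypothetical, not PyTorch API):

```python
import warnings

def _numel(shape):
    """Number of elements implied by a shape tuple."""
    n = 1
    for d in shape:
        n *= d
    return n

def resize_output(out, desired_shape):
    """Sketch of the resize-output pattern: resize only on shape mismatch.

    Returns True if `out` was resized, False if its shape already matched
    and it was left untouched (so an existing correctly-shaped buffer is
    reused rather than reallocated).
    """
    if out['shape'] != desired_shape:
        if out['data']:
            # A non-empty output with a mismatched shape triggers a warning,
            # mirroring the deprecation warning ATen emits in this case.
            warnings.warn("An output with one or more elements was resized")
        out['shape'] = desired_shape
        out['data'] = [0.0] * _numel(desired_shape)
        return True
    return False
```

This is only an illustration of the control flow; the real `resize_output` operates on tensor storage inside ATen.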