Avoid dynamic dispatch inside the omp loop in AdaptiveAvgPool2d #20366
crcrpar wants to merge 3 commits into pytorch:master from
Conversation
@crcrpar tests are failing:
@zou3519 Sorry, I forgot about the JIT tests.
My weekdays were a bit busy, so I am sorry for being late. I think I fixed the problem and the
I found that https://github.com/pytorch/pytorch/wiki/TH-to-ATen-porting-guide#dispatch-and-openmp is a bit outdated, because ATen now has
Quick tip: you have to "dismiss" the review for it to show up in my review inbox.
I think the problem discussed in the porting guide is obsolete now, because lambdas work inside lambdas. So you don't need to make separate functions anymore.
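For context, the nesting being discussed looks roughly like the sketch below. The outer lambda plays the role of an `AT_DISPATCH_*` macro body and the inner lambda is the per-element kernel that the porting guide previously said had to be hoisted into a separate named function; the names and the toy computation here are illustrative, not ATen's actual code.

```cpp
#include <cassert>
#include <vector>

// The outer lambda stands in for an AT_DISPATCH_* body; the nested lambda
// is the kernel that no longer needs to be a separate named function.
int sum_of_squares(const std::vector<int>& xs) {
  int total = 0;
  auto dispatch_body = [&]() {                  // outer ("dispatch") lambda
    auto kernel = [](int x) { return x * x; };  // nested kernel lambda
    for (int x : xs) {
      total += kernel(x);
    }
  };
  dispatch_body();
  return total;
}
```

With modern compilers this nesting builds and runs fine, which is why the separate-function workaround from the porting guide is no longer required.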
|
But I won't tell you to fix it! No problem with more functions. |
facebook-github-bot left a comment:
@ezyang is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: This PR changes the CPU implementation of `AdaptiveAveragePool2D` by:
- moving dispatch to outside the OpenMP loop
- supporting fp16

Pull Request resolved: pytorch/pytorch#20366
Differential Revision: D15456069
Pulled By: ezyang
fbshipit-source-id: 00fa2916f8b136af9f5c8b5db0eca4619f9f5bac
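The structural change can be illustrated with a self-contained C++ sketch. This is not the actual ATen code: `ScalarType`, `pool`, `pool_typed`, and `window_mean` are stand-ins. The point is that the dtype switch runs once, before the (potentially OpenMP-parallel) loop, instead of being re-resolved on every iteration inside the loop body.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative stand-in for ATen's ScalarType enum (not the real one).
enum class ScalarType { Float, Double };

// Typed inner kernel: the mean of one pooling window (a stand-in for the
// real adaptive-average-pool arithmetic).
template <typename scalar_t>
scalar_t window_mean(const std::vector<scalar_t>& w) {
  scalar_t sum = 0;
  for (scalar_t v : w) sum += v;
  return sum / static_cast<scalar_t>(w.size());
}

// After the change: the whole loop runs inside one typed branch, so the
// loop body never touches the dtype switch. (Before, the switch sat
// inside the loop and executed on every iteration.)
template <typename scalar_t>
std::vector<scalar_t> pool_typed(
    const std::vector<std::vector<scalar_t>>& windows) {
  std::vector<scalar_t> out(windows.size());
  // #pragma omp parallel for
  for (std::size_t i = 0; i < windows.size(); ++i) {
    out[i] = window_mean(windows[i]);
  }
  return out;
}

// Type-erased entry point: the single, hoisted dispatch switch.
void pool(ScalarType t, const void* windows, void* out) {
  switch (t) {
    case ScalarType::Float:
      *static_cast<std::vector<float>*>(out) = pool_typed(
          *static_cast<const std::vector<std::vector<float>>*>(windows));
      break;
    case ScalarType::Double:
      *static_cast<std::vector<double>*>(out) = pool_typed(
          *static_cast<const std::vector<std::vector<double>>*>(windows));
      break;
  }
}
```

In ATen the same effect is achieved by placing the `AT_DISPATCH_*` macro around the whole loop rather than inside it, so the per-element work stays monomorphic.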
OK, thank you very much.