AvgPool2d accepts 0-dim batch size. #40694
Conversation
💊 CI failures summary and remediations
As of commit a5b2628 (more details on the Dr. CI page):
XLA failure: job pytorch_xla_linux_bionic_py3_6_clang9_test is failing.
This comment was automatically generated by Dr. CI.
Thanks for the amazing work!
I honestly think that the many extra zero checks in the CUDA file are pretty confusing (probably because the current code structure is not good enough and is already too complicated).
I would suggest adding a new function to the Pool.h header that handles the zero-dim check with all the logic: 3d and 4d tensors, channels-first and channels-last. Then we can call this function near the beginning of the CPU template and CUDA template functions with an early return. That way, we keep the current CUDA kernel logic much cleaner.
Edit: this function can also be reused later in max_pool, adaptive, and fractional poolings.
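The suggested helper might look something like this. This is a Python sketch of the logic only; the real helper would be a C++ function in Pool.h, and the function name here is illustrative, not an actual PyTorch API:

```python
def pool_has_empty_batch(sizes):
    """Return True when a pooling input has a zero-size batch dimension.

    Hypothetical sketch: a 4d input is (N, C, H, W) (channels-last has
    the same rank), so the batch dim is sizes[0]; a 3d input (C, H, W)
    has no batch dimension at all, so it can never be an empty batch.
    """
    return len(sizes) == 4 and sizes[0] == 0

# Callers would invoke this near the top of the pooling template and
# return an empty output early, keeping the kernel logic clean:
assert pool_has_empty_batch((0, 3, 8, 8))      # empty batch -> early return
assert not pool_has_empty_batch((2, 3, 8, 8))  # normal batch
assert not pool_has_empty_batch((3, 8, 8))     # 3d input, no batch dim
```

With this in place, the per-layout zero checks scattered through the CUDA kernels collapse into a single early return at the top of each template function.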
@xwang233 can you review this again? The test failure does not seem like my fault.
xwang233
left a comment
Thanks, overall it looks good. Can you also rebase your branch? That may resolve the test failures.
…e pool2d_shape_check function instead of computing it inside the function
facebook-github-bot
left a comment
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
vulkan errors are real.
facebook-github-bot
left a comment
@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Hi @v0dro! Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention. You currently have a record in our system, but we do not have a signature on file. In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!
Can this be merged if approved?

I'm trying to merge, but the land is failing.

@ngimel how about now?

I've been trying to land it for the whole weekend; I can't do it for unknown reasons.

Ahh... let me try sending a fresh PR.
Hello @v0dro, I have tried your method. AvgPool2d can now accept a 0-dim batch size tensor, but when I tried MaxPool2d, it still does not work. Do you know what else I need to change in the torch source code so that MaxPool2d can accept a 0-dim batch size tensor?
Summary: Resubmitting pytorch#40694 since it could not be landed for some reason. CC ngimel
Pull Request resolved: pytorch#47426
Reviewed By: mruberry
Differential Revision: D24941350
Pulled By: ngimel
fbshipit-source-id: b7e50346d86eb63aaaf4fdd5ee71fafee2d0b476
This PR is a partial fix for #12013, allowing AvgPool2d to accept 0-dim batch sized tensors.
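With the fix in place, the behavior can be exercised like this (a minimal sketch; shapes assume the standard (N, C, H, W) layout):

```python
import torch
import torch.nn as nn

# An input whose batch dimension is 0: no samples, but well-defined
# channel and spatial dimensions.
x = torch.empty(0, 3, 8, 8)

# After the fix, AvgPool2d returns an empty output with pooled spatial
# dims instead of raising an error. kernel_size=2 with the default
# stride (= kernel_size) halves each spatial dim: 8 -> 4.
out = nn.AvgPool2d(kernel_size=2)(x)
assert out.shape == (0, 3, 4, 4)
```

The batch dimension stays 0 while the channel and pooled spatial dimensions are computed as usual, which is what downstream code iterating over empty batches expects.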