AvgPool2d accepts 0-dim batch size.#40694

Closed
v0dro wants to merge 11 commits into pytorch:master from v0dro:avg-pool-0-dim
Conversation

@v0dro
Contributor

@v0dro v0dro commented Jun 29, 2020

This PR is a partial fix for #12013, allowing AvgPool2d to accept tensors with a 0-dim batch size.

@dr-ci

dr-ci Bot commented Jun 29, 2020

💊 CI failures summary and remediations

As of commit a5b2628 (more details on the Dr. CI page):


  • 1/1 failures introduced in this PR

XLA failure

Job pytorch_xla_linux_bionic_py3_6_clang9_test is failing. Please create an issue with title prefixed by [PT_BREAK] in pytorch/xla and link to this PR. If you have questions, please reach out to @ailzhang / @dlibenzi / @JackCaoG.


This comment was automatically generated by Dr. CI.

Collaborator

@xwang233 xwang233 left a comment


Thanks for the amazing work!

I honestly think all these extra zero checks in the CUDA file are pretty confusing (probably because the current code structure is not good enough and is already too complicated).

I would suggest adding a new function in the Pool.h header file that checks for zero dims with all the logic: 3d and 4d tensors, channels-first and channels-last. Then we can call this function near the beginning of the CPU template and CUDA template functions with an early return. With that, we can keep the current CUDA kernel logic much cleaner.

Edit: this function can also be reused later in max_pool, adaptive, fractional poolings.

@mruberry mruberry added the module: batching and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels Jul 7, 2020
@v0dro
Contributor Author

v0dro commented Oct 22, 2020

@xwang233 can you review this again? The test failure does not seem like my fault.

Collaborator

@xwang233 xwang233 left a comment


Thanks, overall it looks good. Can you also rebase your branch? That may resolve the test failures.

…e pool2d_shape_check function instead of computing it inside the function
Contributor

@facebook-github-bot facebook-github-bot left a comment


@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ngimel
Collaborator

ngimel commented Oct 28, 2020

vulkan errors are real.

Contributor

@facebook-github-bot facebook-github-bot left a comment


@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

Hi @v0dro!

Thank you for your pull request. We require contributors to sign our Contributor License Agreement, and yours needs attention.

You currently have a record in our system, but we do not have a signature on file.

In order for us to review and merge your code, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

If you have received this in error or have any questions, please contact us at cla@fb.com. Thanks!

@v0dro
Contributor Author

v0dro commented Oct 31, 2020

Can this be merged if approved?

@ngimel
Collaborator

ngimel commented Oct 31, 2020

I'm trying to merge but the land is failing.

@v0dro
Contributor Author

v0dro commented Nov 2, 2020

@ngimel how about now?

@ngimel
Collaborator

ngimel commented Nov 2, 2020

I've been trying to land it for the whole weekend, but I can't for unknown reasons.

@v0dro
Contributor Author

v0dro commented Nov 5, 2020

Ahh.. let me try sending a fresh PR.

@v0dro v0dro closed this Nov 5, 2020
facebook-github-bot pushed a commit that referenced this pull request Nov 14, 2020
Summary:
Resubmitting #40694 since it could not be landed for some reason.

CC ngimel

Pull Request resolved: #47426

Reviewed By: mruberry

Differential Revision: D24941350

Pulled By: ngimel

fbshipit-source-id: b7e50346d86eb63aaaf4fdd5ee71fafee2d0b476
@SarchWu

SarchWu commented Mar 4, 2021

This PR is a partial fix for #12013, allowing AvgPool2d to accept tensors with a 0-dim batch size.

Hello v0dro, I have tried your method, and now AvgPool2d can accept a 0-dim batch size tensor. But when I tried MaxPool2d, it still does not work. Do you know what else I need to change in the torch source code so that MaxPool2d can accept a 0-dim batch size tensor?

laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Labels

cla signed, module: batching, open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


7 participants