
Add quantized adaptive avgpool.#36813

Closed
kimishpatel wants to merge 7 commits intogh/kimishpatel/2/basefrom
gh/kimishpatel/2/head

Conversation

@kimishpatel
Contributor

@kimishpatel kimishpatel commented Apr 17, 2020

Stack from ghstack:

  • Changes to q_avgpool to map special cases of adaptive avgpool to avgpool.
  • Map special cases of adaptive avg pool to avgpool.

Differential Revision: D21093837

@dr-ci

dr-ci Bot commented Apr 17, 2020

💊 Build failures summary and remediations

As of commit 0f52433 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



a_pool_q = torch.quantize_per_tensor(a_pool, scale=scale, zero_point=zero_point,
                                     dtype=torch.quint8)
np.testing.assert_array_almost_equal(a_pool_q.int_repr().numpy(),
                                     qa_pool.int_repr().numpy(), decimal=0)


Why is decimal=0? Can we have a tighter reference? Alternatively, can we compare against the results of the non-qnnpack quantized pool?

In numpy, assert_array_almost_equal with decimal=0 is equivalent to np.abs(difference) < 1.5. On the server side we use a precision of 1. I wonder whether the qnnpack numerics differ from the server-side kernel.

Contributor Author


This is not deliberate, but we are not comparing floating-point values here; we are comparing the quantized output.
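For reference, the numpy tolerance being discussed can be sketched directly. This is a minimal illustration, not the PR's test: the arrays below are made up and simply stand in for the two int_repr() outputs being compared.

```python
import numpy as np

# With decimal=0, assert_array_almost_equal requires
# abs(actual - desired) < 1.5 * 10**(-0) == 1.5 elementwise.
a = np.array([10.0, 20.0, 30.0])
b = a + 1.4  # within the 1.5 tolerance, so the assertion passes
np.testing.assert_array_almost_equal(a, b, decimal=0)

# A difference of 1.6 exceeds the tolerance and raises AssertionError.
try:
    np.testing.assert_array_almost_equal(a, a + 1.6, decimal=0)
    raised = False
except AssertionError:
    raised = True
```

So for quantized int outputs, decimal=0 effectively allows an off-by-one in the integer representation, which matches the "precision 1" convention mentioned above.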


@z-a-f left a comment


LGTM, save for one comment.

Comment on lines +236 to +237
IntArrayRef output_size) {
std::array<int64_t, 2> kernel_size;
Contributor


Does clang-format put these at the same indent? Weird.

Contributor Author


My bad; fixing this. Thanks for catching it.

Comment on lines +274 to +275
return ((input_height % output_height == 0) &&
        (input_width % output_width == 0));
Contributor


Is this true in practice? I thought the point of adaptive average pool was that you wanted to shrink a variable-sized input into a fixed-sized output.

Contributor Author


This lets us reduce the op to a regular avgpool. If the condition is not met, we fall back to the slower path here: https://github.com/pytorch/pytorch/pull/36813/files#diff-c227aae55653cf48955fdf36f92378afR286.
This is because qnnpack does not have adaptive avg pool implemented. I plan to implement it in PT qnnpack.

Contributor


I understand that this is safe, but my question is how often it is true in practice. If this is an uncommon case, I don't think it's worth having an optimized implementation for it.

Contributor Author


I don't know how common this is in general, but it holds for the internal mobile vision models I was trying to optimize, and the unoptimized path is much slower.
Maybe we can remove this once I have a qnnpack implementation of adaptive avgpool, but that will still be slower in degenerate cases like this one.
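The special case under discussion can be sketched as follows. This is a hypothetical numpy model, not the PR's C++ code: when both spatial dimensions divide evenly by the output dimensions, adaptive average pooling collapses to a plain average pool with kernel == stride == input_dim // output_dim.

```python
import numpy as np

def avg_pool2d(x, kh, kw):
    """Plain average pool with kernel (kh, kw) and matching stride."""
    h, w = x.shape
    return x.reshape(h // kh, kh, w // kw, kw).mean(axis=(1, 3))

def adaptive_avg_pool2d(x, out_h, out_w):
    """Reference adaptive average pool: output cell (i, j) averages the
    input region [floor(i*h/out_h), ceil((i+1)*h/out_h)) x
                  [floor(j*w/out_w), ceil((j+1)*w/out_w))."""
    h, w = x.shape
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            h0, h1 = i * h // out_h, -((i + 1) * h // -out_h)  # ceil division
            w0, w1 = j * w // out_w, -((j + 1) * w // -out_w)
            out[i, j] = x[h0:h1, w0:w1].mean()
    return out

x = np.arange(24, dtype=np.float64).reshape(4, 6)
# 4 % 2 == 0 and 6 % 3 == 0, so the adaptive pool reduces to a fixed-kernel
# average pool -- this is the condition the divisibility check guards.
fast = avg_pool2d(x, 4 // 2, 6 // 3)
ref = adaptive_avg_pool2d(x, 2, 3)
```

When the dimensions do not divide evenly, the adaptive pool's window sizes vary per output cell, and no single fixed kernel reproduces it, which is why the PR falls back to the slow path in that case.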

* | | | | | | | |
* ---------------
*
* Thus we are going from width=7 height=5 input to height=2 width=3
Contributor


Why did you switch from width,height to height,width? Is that an error?

Contributor Author


Thanks for the catch. I will make it consistent.

@facebook-github-bot
Contributor

This pull request has been merged in 6de949a.

@facebook-github-bot facebook-github-bot deleted the gh/kimishpatel/2/head branch May 5, 2020 14:17
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Summary:
Pull Request resolved: pytorch#36813

- Changes to q_avgpool to map special cases of adaptive avgpool to avgpool.
- Map special cases of adaptive avg pool to avgpool.
ghstack-source-id: 103218410

Test Plan: QuantizedOps.test_adaptive_avgpool2d

Reviewed By: z-a-f

Differential Revision: D21093837

fbshipit-source-id: c45a03b597eaa59e1057561ee4e8e116ac138f8f