Allow per-channel QTensor accept any floating type for scales #26676

Closed
dzhulgakov wants to merge 2 commits into gh/dzhulgakov/9/base from gh/dzhulgakov/9/head
Conversation

Collaborator

@dzhulgakov dzhulgakov commented Sep 23, 2019

Stack from ghstack:

Just makes it more user-friendly to be able to pass floating-point or integer values of any dtype for scales or zero_points in per-channel quantization. This matches the behavior of the per-tensor quantizer, where those arguments are scalars (not tensors) and thus automatic casting is applied.
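The relaxed dtype handling described above can be sketched with a minimal example; the specific scale and zero-point values below are illustrative, not taken from the PR:

```python
import torch

# Per-channel quantization of a float tensor along axis 1.
x = torch.randn(2, 3)

# With this change, scales may be any floating dtype (e.g. float32
# instead of float64) and zero_points any integer dtype (e.g. int32),
# mirroring the per-tensor API where scalar args are cast automatically.
scales = torch.tensor([0.1, 0.2, 0.3], dtype=torch.float32)
zero_points = torch.tensor([0, 1, 2], dtype=torch.int32)

q = torch.quantize_per_channel(x, scales, zero_points, axis=1,
                               dtype=torch.quint8)
print(q.is_quantized)          # True
print(q.dequantize().shape)    # same shape as x
```

For comparison, the per-tensor variant `torch.quantize_per_tensor(x, scale, zero_point, dtype)` takes plain Python scalars for scale and zero_point, which is why casting was never an issue there.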

Differential Revision: D17537051

Contributor

@jerryzh168 left a comment

LGTM

Update on "Allow per-channel QTensor accept any floating type for scales"


Just makes it more user-friendly to be able to pass floating-point or integer values of any dtype for scales or zero_points in per-channel quantization. This matches the behavior of the per-tensor quantizer, where those arguments are scalars (not tensors) and thus automatic casting is applied.

Differential Revision: [D17537051](https://our.internmc.facebook.com/intern/diff/D17537051)

[ghstack-poisoned]
dzhulgakov pushed a commit that referenced this pull request Sep 24, 2019
zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 24, 2019
Summary:
Pull Request resolved: pytorch/pytorch#26676

Just makes it more user-friendly to be able to pass floating-point or integer values of any dtype for scales or zero_points in per-channel quantization. This matches the behavior of the per-tensor quantizer, where those arguments are scalars (not tensors) and thus automatic casting is applied.

Test Plan: Imported from OSS

Differential Revision: D17537051

Pulled By: dzhulgakov

fbshipit-source-id: e955ccdb5b4691828a559dc8f1ed7de54b6d12c4
@facebook-github-bot
Contributor

@dzhulgakov merged this pull request in ade60f8.

@facebook-github-bot deleted the gh/dzhulgakov/9/head branch October 28, 2019 22:08
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
Allow per-channel QTensor accept any floating type for scales (pytorch#26676)

Summary:
Pull Request resolved: pytorch#26676

Just makes it more user-friendly to be able to pass floating-point or integer values of any dtype for scales or zero_points in per-channel quantization. This matches the behavior of the per-tensor quantizer, where those arguments are scalars (not tensors) and thus automatic casting is applied.

Test Plan: Imported from OSS

Differential Revision: D17537051

Pulled By: dzhulgakov

fbshipit-source-id: e955ccdb5b4691828a559dc8f1ed7de54b6d12c4
Labels

Merged, oncall: quantization (Quantization support in PyTorch)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants