
[pt1][quant] Fix empty_like to add more support when input_tensor has dtype as q(u)int8 #27092

Closed
jianyuh wants to merge 1 commit into gh/jianyuh/33/base from gh/jianyuh/33/head

Conversation

@jianyuh (Member) commented Sep 30, 2019

Stack from ghstack:

We would like to add the support for empty_like for the case where input_tensor has dtype of q(u)int8 and options has a different type.

Differential Revision: D17672685

… dtype as q(u)int8

We would like to add the support for `empty_like` for the case where input_tensor has dtype of q(u)int8 and options has a different type.

Differential Revision: [D17672685](https://our.internmc.facebook.com/intern/diff/D17672685/)

[ghstack-poisoned]
jianyuh added a commit that referenced this pull request Sep 30, 2019

ghstack-source-id: 91047735
Pull Request resolved: #27092
@jianyuh jianyuh requested a review from jerryzh168 September 30, 2019 20:18
```cpp
if (self.is_quantized() && options.dtype() == self.dtype()) {
  // We could check if dtype is still quantized? But then should we shift/scale
  // the q_zero_point / q_scale or not?
  TORCH_CHECK(!options.has_dtype() || options.dtype() == self.dtype(),
```
Collaborator

Why is this restriction in place?

Member Author

Consider the case where `self.dtype()` is quantized and `options.dtype()` is non-quantized. For example, when we apply:

```cpp
auto buffer = at::zeros_like(output, output.options().dtype(at::kInt));
```

where `output` is a quantized tensor with `kQUInt8` dtype, we should still be able to create an empty tensor with

```cpp
at::empty(self.sizes(), options, use_memory_format);
```

(Note that we shouldn't use `at::_empty_affine_quantized` or `at::_empty_per_channel_affine_quantized` here, as `options.dtype()` is non-quantized.)

Before this PR, the code simply reported an error for this case.


```cpp
if (self.is_quantized()) {
if (self.is_quantized() && options.dtype() == self.dtype()) {
```

There is already a TORCH_CHECK later on. I think the check is better, assuming it has a good error message.

Member Author

I can remove that TORCH_CHECK; see my comment above. I just want to add support for the case where `self.is_quantized()` but `options.dtype() != self.dtype()`.

Contributor

Actually, what about the case when `self` is not quantized and `options.dtype()` is quantized? We should disallow that.

Contributor

I think there is also the case when `options.has_dtype()` is false.

@jianyuh (Member Author) commented Oct 1, 2019

Good point: it is a little messy to cover all the possible logic. The basic idea is to check whether we should generate an empty quantized tensor or an empty normal tensor. Let me think these cases over.

@jerryzh168 (Contributor) left a comment:

looks good

@jerryzh168 (Contributor) left a comment:

need more thought on the condition

@facebook-github-bot facebook-github-bot deleted the gh/jianyuh/33/head branch August 13, 2020 14:15
