[pt1][quant] Fix empty_like to add more support when input_tensor has dtype as q(u)int8 #27092
jianyuh wants to merge 1 commit into gh/jianyuh/33/base
Conversation
… dtype as q(u)int8

We would like to add support for `empty_like` for the case where the input tensor has dtype q(u)int8 and `options` has a different dtype.

Differential Revision: [D17672685](https://our.internmc.facebook.com/intern/diff/D17672685/)
```cpp
  if (self.is_quantized() && options.dtype() == self.dtype()) {
    // We could check if dtype is still quantized? But then should we shift/scale
    // the q_zero_point / q_scale or not?
    TORCH_CHECK(!options.has_dtype() || options.dtype() == self.dtype(),
```
Why is this restriction in place?
I think this covers the case where `self.dtype()` is quantized and `options.dtype()` is non-quantized, for example when we apply:

```cpp
auto buffer = at::zeros_like(output, output.options().dtype(at::kInt));
```

where `output` is a quantized tensor with `kQUInt8` dtype. We should still be able to create an empty tensor with

```cpp
at::empty(self.sizes(), options, use_memory_format);
```

(Note that we shouldn't use `at::_empty_affine_quantized` or `at::_empty_per_channel_affine_quantized` here, since `options.dtype()` is non-quantized.)

With the version of the code before this PR, we simply reported an error.
```diff
   }
-  if (self.is_quantized()) {
+  if (self.is_quantized() && options.dtype() == self.dtype()) {
```
There is already a `TORCH_CHECK` later on. I think the check is better, assuming it has a good error message.
I can remove that `TORCH_CHECK`; see my comment above. I just want to add support for the case where `self.is_quantized()` is true but `options.dtype() != self.dtype()`.
Actually, what about the case when `self` is not quantized and `options.dtype()` is quantized? We should disallow that.
I think there is also the case when `options.has_dtype()` is false.
Good point. It is a little messy to cover all the possible cases. The basic idea is to decide whether we should generate an empty quantized tensor or an empty regular tensor. Let me think these cases over.
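The case analysis discussed in this thread can be sketched as a small decision table. Below is a hypothetical Python model of the decision only; the real implementation is the C++ factory code touched by this PR, and the function/enum names here (`empty_like_dispatch`, `EmptyKind`) are illustrative, not from the diff. The model also collapses the open sub-question above (what to do with scale/zero_point when both dtypes are quantized but differ) into the quantized path:

```python
from enum import Enum, auto

class EmptyKind(Enum):
    QUANTIZED = auto()  # allocate via at::_empty_affine_quantized (or the per-channel variant)
    REGULAR = auto()    # allocate via plain at::empty
    ERROR = auto()      # reject the combination

def empty_like_dispatch(self_is_quantized: bool,
                        options_has_dtype: bool,
                        options_dtype_is_quantized: bool) -> EmptyKind:
    """Hypothetical model: which allocation path should empty_like take?"""
    if self_is_quantized:
        # No explicit dtype in options, or a quantized dtype: stay quantized.
        if not options_has_dtype or options_dtype_is_quantized:
            return EmptyKind.QUANTIZED
        # e.g. zeros_like(quantized_output, options.dtype(kInt)): plain empty.
        return EmptyKind.REGULAR
    # Regular input tensor: asking for a quantized output dtype is disallowed,
    # since there is no scale / zero_point to attach.
    if options_has_dtype and options_dtype_is_quantized:
        return EmptyKind.ERROR
    return EmptyKind.REGULAR
```

This makes the four cases raised above explicit: quantized input with a non-quantized requested dtype falls through to a regular `empty`, a missing dtype inherits the quantized layout, and a non-quantized input cannot be promoted to a quantized output.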
jerryzh168
left a comment
Need more thought on the condition.
Stack from ghstack:
We would like to add the support for `empty_like` for the case where input_tensor has dtype of q(u)int8 and options has a different type.

Differential Revision: D17672685