
Fix all factory invocations in quantized to correctly propagate options.#26966

Closed
ezyang wants to merge 1 commit into gh/ezyang/390/base from gh/ezyang/390/head

Conversation


@ezyang ezyang commented Sep 27, 2019

Stack from ghstack:

Without this, you may allocate intermediates which are non-variables
when you should allocate variables.

Should help with discussion in #26868.

Signed-off-by: Edward Z. Yang ezyang@fb.com

Differential Revision: D17629863

@pytorchbot added the module: operators and oncall: quantization (Quantization support in PyTorch) labels Sep 27, 2019
ezyang added a commit that referenced this pull request Sep 27, 2019

ghstack-source-id: 95ce6f8
Pull Request resolved: #26966
```diff
   bias_fp32 = bias_in.value();
 } else {
-  bias_fp32 = at::zeros(out_ch, at::kFloat);
+  bias_fp32 = at::zeros(out_ch, weight.options().dtype(at::kFloat));
```
Contributor

I patched this PR, but it doesn't seem to fix the error on the cpp_custom_type_hack() code path...
It still fails at the !is_variable() check in empty_cpu:

```
#0  0x00000000006e14de in at::native::empty_cpu(c10::ArrayRef<long>, c10::TensorOptions const&, c10::optional<c10::MemoryFormat>) ()
#1  0x00000000007cf8cf in at::CPUType::empty(c10::ArrayRef<long>, c10::TensorOptions const&, c10::optional<c10::MemoryFormat>) ()
#2  0x00000000007a364d in at::Tensor at::cpp_custom_type_hack::create<PackedConvWeightsQnnp>(std::unique_ptr<PackedConvWeightsQnnp, std::default_delete<PackedConvWeightsQnnp> >, c10::TensorOptions) ()
#3  0x00000000007a2347 in at::native::(anonymous namespace)::QConvPackWeightInt8::qnnpack_conv_prepack(at::Tensor, c10::optional<at::Tensor>, c10::List<long>, c10::List<long>, c10::List<long>, long) [clone .isra.280] ()
```

Contributor Author

Thanks for the quick feedback. I think this patch is a strict improvement, but I need to kill more spots.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Sep 27, 2019
…ns. (#26966)

Summary:
Pull Request resolved: pytorch/pytorch#26966


Test Plan: Imported from OSS

Differential Revision: D17629863

Pulled By: ezyang

fbshipit-source-id: 0dd9b218d3fc2dbbbbd9b1712db8ab4dac16ea22
@facebook-github-bot
@ezyang merged this pull request in b2e43e4.

@facebook-github-bot facebook-github-bot deleted the gh/ezyang/390/head branch October 28, 2019 22:12
pdlive215 pushed a commit to pdlive215/pytorch that referenced this pull request Nov 27, 2019
…ns. (pytorch#26966)
