
Fake quantization enhancements for QAT/PTQ support - fix tests #26876

Closed
raghuramank100 wants to merge 3 commits into gh/raghuramank100/39/base from gh/raghuramank100/39/head

Conversation

@raghuramank100
Contributor

@raghuramank100 raghuramank100 commented Sep 26, 2019

Stack from ghstack:

Add the ability to turn fake quantization and observers on and off independently.

Differential Revision: [D17592961](https://our.internmc.facebook.com/intern/diff/D17592961/)
Collaborator

@dzhulgakov dzhulgakov left a comment


Tests on this one fail with an autograd error which seems to be related, but it's probably from another PR in the stack.

def enable_fake_quant(self):
    self.observer.enable()
    self.weight_fake_quant.enable()
    self.observer.enable_fake_quant()
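To illustrate what the independent toggles described in this PR buy you, here is a minimal pure-Python sketch. This is not PyTorch's actual FakeQuantize implementation; the class and helper names below are invented for illustration. The idea is that statistics collection (the observer) and the quantize/dequantize simulation (fake quant) are controlled by separate flags.

```python
class FakeQuantizeSketch:
    """Illustrative sketch: observer and fake quantization toggled independently."""

    def __init__(self, quant_min=0, quant_max=255):
        self.quant_min = quant_min
        self.quant_max = quant_max
        self.fake_quant_enabled = True
        self.observer_enabled = True
        # Running statistics collected by the "observer".
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def enable_fake_quant(self, enabled=True):
        self.fake_quant_enabled = enabled

    def disable_fake_quant(self):
        self.enable_fake_quant(False)

    def enable_observer(self, enabled=True):
        self.observer_enabled = enabled

    def disable_observer(self):
        self.enable_observer(False)

    def forward(self, values):
        if self.observer_enabled:
            # Track running min/max, as an observer would.
            self.min_val = min(self.min_val, min(values))
            self.max_val = max(self.max_val, max(values))
        if self.fake_quant_enabled and self.max_val > self.min_val:
            scale = (self.max_val - self.min_val) / (self.quant_max - self.quant_min)
            zero_point = self.quant_min - round(self.min_val / scale)
            # Simulate quantization: quantize, clamp, then dequantize.
            return [
                (max(self.quant_min,
                     min(self.quant_max, round(v / scale) + zero_point))
                 - zero_point) * scale
                for v in values
            ]
        return values
```

This separation is what makes the workflows in the PR title possible: during PTQ-style calibration you run with observers on and fake quant off to collect ranges without perturbing activations, while late in QAT you can freeze the observers (statistics fixed) and keep fake quant on so training sees stable quantization noise.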
Collaborator


I feel the ordering of the PRs is weird as you delete it later :) but it's ok

Contributor Author


Yes, in hindsight having fewer PRs would have been better.

…sts"

Add the ability to turn fake quantization and observers on and off independently.

Differential Revision: [D17592961](https://our.internmc.facebook.com/intern/diff/D17592961/)
…sts"

Add the ability to turn fake quantization and observers on and off independently.

Differential Revision: [D17592961](https://our.internmc.facebook.com/intern/diff/D17592961/)
@facebook-github-bot
Contributor

This pull request has been merged in 9a5e2e8.

pdlive215 pushed a commit to pdlive215/pytorch that referenced this pull request Nov 27, 2019
…h#26876)

Summary:
Pull Request resolved: pytorch#26876

Add the ability to turn fake quantization and observers on and off independently.
ghstack-source-id: 90892132

Test Plan: buck test caffe2/test:quantized -- 'test_conv_bn_relu \(test_qat\.IntrinsicQATModuleTest\)' --print-passing-details

Differential Revision: D17592961

fbshipit-source-id: 24c60c94ed7c6c9fa55c634a8545731614e4f52f
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026

Labels

Merged · module: nn (Related to torch.nn)


6 participants