Per-channel baseline #26516

Closed
raghuramank100 wants to merge 19 commits into gh/raghuramank100/26/base from gh/raghuramank100/26/head

Conversation

Contributor

@raghuramank100 commented Sep 20, 2019

Stack from ghstack:

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)

[ghstack-poisoned]
@pytorchbot added the "module: nn (Related to torch.nn)" label Sep 20, 2019
raghuramank100 pushed a commit that referenced this pull request Sep 20, 2019
Pull Request resolved: #26516


ghstack-source-id: 90471845

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)
def test_float_quant_compare_per_channel(self):
# Test for per-channel Quant
torch.manual_seed(67)
myModel = ModelMultipleOps().to(torch.float32)
Contributor

nit: can you use '_' naming e.g. 'my_model', or maybe just 'model'? so that we have a consistent naming?
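For context on what the test above checks: it compares the float model's outputs against the quantized model's outputs numerically, typically via a signal-to-quantization-noise ratio (SQNR). A minimal pure-Python sketch of that metric (the actual test works on torch tensors; the step size and data here are illustrative, not taken from the PR):

```python
import math

def sqnr_db(reference, quantized):
    """Signal-to-quantization-noise ratio in dB between two sequences."""
    signal = sum(r * r for r in reference)
    noise = sum((r - q) ** 2 for r, q in zip(reference, quantized))
    return 10.0 * math.log10(signal / noise)

ref = [0.1 * i for i in range(1, 101)]
scale = 10.0 / 255                               # illustrative 8-bit step size
quant = [round(r / scale) * scale for r in ref]  # simulate 8-bit rounding
print(round(sqnr_db(ref, quant), 1))             # tens of dB for 8-bit rounding
```

A model-level comparison passes when the SQNR between float and quantized outputs stays above a chosen threshold.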

# regular QTensor form for serialization. Packed weights should not live
# outside the process in which they were created, rather they should be derived
# from the QTensor weight.

Contributor

remove blank changes?
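The code comment above describes the serialization contract: only the regular QTensor form is saved, and the backend-specific packed form is re-derived after loading. A toy sketch of that pattern (the class and its packing are hypothetical stand-ins; the real quantized modules pack through backend prepack ops):

```python
class QuantLinearSketch:
    """Toy module: serialize the QTensor form, re-pack on load."""

    def __init__(self, qweight):
        self.qweight = qweight              # portable quantized form
        self._packed = self._pack(qweight)  # process-local packed form

    @staticmethod
    def _pack(w):
        # stand-in for a backend-specific prepack (e.g. a layout shuffle)
        return tuple(tuple(row) for row in w)

    def state_dict(self):
        # only the QTensor weight leaves the process
        return {"weight": self.qweight}

    def load_state_dict(self, state):
        self.qweight = state["weight"]
        # packed weights are never deserialized, always re-derived
        self._packed = self._pack(self.qweight)

m = QuantLinearSketch([[1, 2], [3, 4]])
m2 = QuantLinearSketch([[0, 0], [0, 0]])
m2.load_state_dict(m.state_dict())
```

Keeping the packed form out of the serialized state is what lets a checkpoint move between processes and backends.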

act_scale, act_zp = activation_observer.calculate_qparams()
assert weight_observer.dtype == torch.qint8, 'Weight observer must have a dtype of qint8'
wt_scale, wt_zp = weight_observer.calculate_qparams()
if weight_observer.qscheme in set([torch.per_tensor_symmetric, torch.per_tensor_affine]):
Contributor

same here

raghuramank100 pushed a commit that referenced this pull request Sep 27, 2019
Pull Request resolved: #26516


ghstack-source-id: 90895960

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)

def _quantize_weight(float_wt, observer):
wt_scale, wt_zp = observer.calculate_qparams()
if observer.qscheme in {torch.per_tensor_symmetric, torch.per_tensor_affine}:
Contributor

nit: this can be a list
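To illustrate the branch above: per-tensor schemes produce one (scale, zero_point) pair for the whole weight, while per-channel schemes produce one pair per output channel. A pure-Python sketch of affine qint8 qparams (PyTorch's observers compute this via calculate_qparams(); the helper names and example weights here are illustrative):

```python
QMIN, QMAX = -128, 127  # qint8 range

def qparams(lo, hi):
    # affine scale/zero-point for a single min/max range
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # range must include zero
    scale = (hi - lo) / (QMAX - QMIN)
    if scale == 0.0:
        return 1.0, 0
    return scale, int(QMIN - round(lo / scale))

def per_tensor(w):
    flat = [x for row in w for x in row]
    return qparams(min(flat), max(flat))

def per_channel(w):
    # one (scale, zero_point) per output channel (row)
    return [qparams(min(row), max(row)) for row in w]

w = [[-1.0, 1.0], [-0.1, 0.1]]
# the small second channel gets a 10x finer scale per-channel
print(per_tensor(w), per_channel(w))
```

This is why per-channel quantization helps numerics: a channel with a small dynamic range is no longer forced to share the scale of the largest channel.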

@@ -4,26 +4,47 @@
from common_quantization import QuantizationTestCase, ModelMultipleOps
Contributor

why is this not in test_quantization.py?

Contributor

this looks like an integration test

Contributor Author

Let's chat. test_quantized_models covers tests at a higher level: more complex models, with float and quantized numerics compared at the model level. I think it's better to keep it as a separate file, since test_quantization is already very large.

Contributor

@jerryzh168 left a comment

maybe remove test_quantized_model.py and move the code to test_quantization.py? could be in another PR

raghuramank100 pushed a commit that referenced this pull request Sep 27, 2019
Pull Request resolved: #26516


ghstack-source-id: 90926724

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)
@raghuramank100 raghuramank100 added this to the 1.3 milestone Sep 27, 2019
raghuramank100 pushed a commit that referenced this pull request Sep 27, 2019
Pull Request resolved: #26516


ghstack-source-id: 90966545

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)
@facebook-github-bot
Contributor

This pull request has been merged in 2ccbdb7.

jamesr66a pushed a commit that referenced this pull request Oct 3, 2019
Summary:
Pull Request resolved: #26516

ghstack-source-id: 90982010

Test Plan:
Integrate per-channel support into conv and linear modules.
The following tests pass:
buck test caffe2/test:quantized -- 'test_linear_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details

buck test caffe2/test:quantized -- 'test_conv_api \(test_quantized_nn_mods\.ModuleAPITest\)' --print-passing-details

buck test caffe2/test:quantized -- 'test_float_quant_compare_per_channel \(test_quantized_models\.ModelNumerics\)' --print-passing-details

Differential Revision: D17342622

fbshipit-source-id: f0d618928e3d9348672c589a6b7a47049c372a2e
jamesr66a pushed a commit that referenced this pull request Oct 3, 2019
jamesr66a pushed a commit that referenced this pull request Oct 4, 2019
jamesr66a pushed a commit that referenced this pull request Oct 4, 2019
soumith pushed a commit that referenced this pull request Oct 7, 2019
pdlive215 pushed a commit to pdlive215/pytorch that referenced this pull request Nov 27, 2019
xxtEchjovs44 pushed a commit to xxtEchjovs44/pytorch that referenced this pull request Jan 29, 2020
Pull Request resolved: pytorch/pytorch#26516


ghstack-source-id: 90704045

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)
xxtEchjovs44 pushed a commit to xxtEchjovs44/pytorch that referenced this pull request Jan 29, 2020
Pull Request resolved: pytorch/pytorch#26516


ghstack-source-id: 90988292

Differential Revision: [D17342622](https://our.internmc.facebook.com/intern/diff/D17342622/)

Labels

Merged, module: nn (Related to torch.nn)

7 participants