fake_quant: fix device affinity and buffer resizing for state_dict #50868
Closed
vkuzo wants to merge 2 commits into gh/vkuzo/213/base from
Conversation
Summary: Ensures that `FakeQuantize` respects device affinity when loading from state_dict, and knows how to resize scale and zero_point values (which is necessary for FQ classes wrapping per channel observers). This is the same as #44537, but for `FakeQuantize`.

Test Plan:

```
python test/test_quantization.py TestObserver.test_state_dict_respects_device_affinity
```

Reviewers:
Subscribers:
Tasks:
Tags:

[ghstack-poisoned]
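The fix amounts to having the module resize its locally registered buffers to the shapes found in the incoming checkpoint before the copy happens, so a module created with per-tensor (size-1) buffers can load a per-channel checkpoint. A minimal sketch, assuming a simplified module; the class name and structure here are illustrative, not PyTorch's actual `FakeQuantize` code:

```python
import torch
import torch.nn as nn

class FakeQuantSketch(nn.Module):
    """Illustrative module whose scale/zero_point buffers may need
    resizing when loading a per-channel state_dict."""

    def __init__(self):
        super().__init__()
        # Start with per-tensor (size-1) buffers; a per-channel
        # checkpoint will carry larger tensors.
        self.register_buffer('scale', torch.tensor([1.0]))
        self.register_buffer('zero_point', torch.tensor([0], dtype=torch.int64))

    def _load_from_state_dict(self, state_dict, prefix, local_metadata,
                              strict, missing_keys, unexpected_keys,
                              error_msgs):
        # Resize the local buffers in place to match the checkpoint shapes
        # before the base class performs its copy_; copy_ handles any
        # cross-device transfer itself, so the buffers stay on whatever
        # device the module currently lives on (device affinity).
        for name in ('scale', 'zero_point'):
            key = prefix + name
            if key in state_dict:
                loaded = state_dict[key]
                local = getattr(self, name)
                if local.shape != loaded.shape:
                    local.resize_(loaded.shape)
        super()._load_from_state_dict(state_dict, prefix, local_metadata,
                                      strict, missing_keys, unexpected_keys,
                                      error_msgs)
```

With this override, loading a per-channel checkpoint (e.g. three-element scale and zero_point tensors) into a freshly constructed module succeeds, and the buffers end up on the destination module's device rather than the checkpoint's.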
vkuzo added a commit that referenced this pull request on Jan 21, 2021
ghstack-source-id: 737eefd
Pull Request resolved: #50868
jerryzh168 reviewed on Jan 21, 2021
…ate_dict"

Differential Revision: [D25991570](https://our.internmc.facebook.com/intern/diff/D25991570)

[ghstack-poisoned]
vkuzo added a commit that referenced this pull request on Jan 25, 2021
ghstack-source-id: 317bd8b
Pull Request resolved: #50868
Codecov Report

```
@@             Coverage Diff              @@
##     gh/vkuzo/213/base   #50868   +/-  ##
===========================================
- Coverage        80.91%   80.91%   -0.01%
===========================================
  Files             1926     1926
  Lines           210014   210022       +8
===========================================
+ Hits            169942   169943       +1
- Misses           40072    40079       +7
```
Contributor
This pull request has been merged in f8eefbd.
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026
…ytorch#50868)

Pull Request resolved: pytorch#50868
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25991570
fbshipit-source-id: 1193a6cd350bddabd625aafa0682e2e101223bb1
Stack from ghstack:

Summary:
Ensures that `FakeQuantize` respects device affinity when loading from state_dict, and knows how to resize scale and zero_point values (which is necessary for FQ classes wrapping per channel observers).

This is the same as #44537, but for `FakeQuantize`.

Test Plan:

Reviewers:
Subscribers:
Tasks:
Tags:

Differential Revision: D25991570
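For context on why the buffer resizing matters: with the default `nn.Module` loading logic and no override, a size-1 buffer cannot receive a per-channel checkpoint tensor, because a strict `load_state_dict` rejects the shape mismatch instead of resizing. A small illustration, using a hypothetical module name:

```python
import torch
import torch.nn as nn

class NoResize(nn.Module):
    """Hypothetical module with a per-tensor (size-1) scale buffer and
    no _load_from_state_dict override."""
    def __init__(self):
        super().__init__()
        self.register_buffer('scale', torch.tensor([1.0]))

src = NoResize()
src.scale = torch.tensor([0.1, 0.2, 0.3])  # simulate a per-channel checkpoint

dst = NoResize()
try:
    # Strict load: the base class reports a size mismatch for 'scale'
    # rather than resizing the destination buffer.
    dst.load_state_dict(src.state_dict())
except RuntimeError:
    print('load failed: size mismatch')
```

This is the failure mode that the `_load_from_state_dict` override in `FakeQuantize` (and, per #44537, in the observers) is meant to avoid.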