Add parameter_name support to _float8_dynamic_activation_int4_weight_transform #3902
Merged
Conversation
This config was deprecated in favor of Float8DynamicActivationFloat8WeightConfig with packing_format=Float8PackingFormat.SPARSE_CUTLASS and granularity=PerRow(). Remove the class definition, handler, and all references from imports, tests, and benchmarks. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
This config was deprecated in favor of Int8DynamicActivationIntxWeightConfig. Remove the class definition, handler, and all references from imports, tests, QAT code, benchmarks, and documentation. Update QAT docs to reference Int4WeightOnlyConfig as the example base config. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
This config was deprecated and scheduled for deletion. Remove the class definition, handler, and all references from imports, tests, benchmarks, and documentation. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
Remove the config class, its supporting classes (Float8ObservedLinear, Float8ObservedSoftmax, Float8QuantizedSoftmax), the handler function, and all references from imports and tests. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
This config was deprecated and scheduled for deletion. Remove the class definition, handler, and all references from imports, tests, benchmarks, and the autoround eval script. This also removes the entire BC import block from quant_api.py since all prototype configs have been removed. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
Summary: This PR removes CUSTOM_PARAM_QUANTIZATION_SUPPORTED_CONFIGS in favor of using `inspect.signature` to ensure that the given handler has a `parameter_name` kwarg we can use to pass in the param FQN.

Test Plan:
```
pytest test/quantization/test_quant_api -k fqn
```
[ghstack-poisoned]
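The `inspect.signature` check described above can be sketched roughly as follows. This is an illustrative sketch, not the actual torchao code: the helper name and the two example handlers are hypothetical.

```python
import inspect

def handler_supports_parameter_name(handler) -> bool:
    # Hypothetical helper: True if the transform handler can accept a
    # `parameter_name` keyword argument (so we can pass the param FQN).
    params = inspect.signature(handler).parameters
    if "parameter_name" in params:
        return True
    # Handlers that take **kwargs can also receive the kwarg.
    return any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())

# Hypothetical handlers for illustration only.
def transform_with_param(module, config, *, parameter_name="weight"):
    ...  # would quantize getattr(module, parameter_name)

def transform_weight_only(module, config):
    ...  # legacy shape, hard-coded to module.weight
```

With this check in place, a hard-coded allowlist of configs is no longer needed; any handler whose signature advertises `parameter_name` qualifies.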
Enable parameter-level quantization by accepting a parameter_name kwarg, using getattr/setattr instead of hard-coded module.weight, and switching to _module_extra_repr with partial for flexible repr. Co-authored-by: Cursor <cursoragent@cursor.com> [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3902
Note: Links to docs will display an error until the docs builds have been completed.
⏳ 1 Pending, 1 Unrelated Failure as of commit 8f82cc5 with merge base d1fa9a2. BROKEN TRUNK: the following job failed but was present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This was referenced Feb 17, 2026
danielvegamyhre approved these changes on Feb 24, 2026
Stack from ghstack (oldest at bottom):
Enable parameter-level quantization by accepting a parameter_name kwarg,
using getattr/setattr instead of hard-coded module.weight, and switching
to _module_extra_repr with partial for flexible repr.
Co-authored-by: Cursor <cursoragent@cursor.com>
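The pattern the description names can be sketched as below: a transform that takes a `parameter_name` kwarg and uses getattr/setattr rather than hard-coded `module.weight`, with a repr attached via `functools.partial` in the spirit of `_module_extra_repr`. This is a minimal sketch, not the actual torchao implementation: the function name is hypothetical and a fake int8 rounding stands in for the real float8-activation/int4-weight quantization.

```python
from functools import partial

import torch
import torch.nn as nn

def quantize_named_param(module: nn.Module, *, parameter_name: str = "weight"):
    # Look up the parameter by name instead of assuming module.weight,
    # so non-weight parameters (e.g. MoE expert params) can be handled too.
    param = getattr(module, parameter_name)

    # Stand-in for the real quantization: symmetric fake-int8 rounding.
    scale = param.detach().abs().amax() / 127.0
    q = torch.clamp(torch.round(param.detach() / scale), -128, 127) * scale

    # Write the quantized tensor back under the same name.
    setattr(module, parameter_name, nn.Parameter(q, requires_grad=False))

    # Attach a repr via partial, analogous to _module_extra_repr.
    module.extra_repr = partial(
        lambda name: f"{name}: fake-int8 quantized", parameter_name
    )
    return module
```

Because the parameter is resolved with `getattr`/`setattr`, the same handler works unchanged for `module.weight` and for any other named parameter the caller passes in.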