
[Code Clean] Clean asserts in torch/ao/quantization (root, quantizer, backend_config) #165433

Closed

zhudada0120 wants to merge 2 commits into pytorch:main from zhudada0120:clean-asserts-quantization-other

Conversation

@zhudada0120
Contributor

@zhudada0120 zhudada0120 commented Oct 14, 2025

@pytorch-bot

pytorch-bot bot commented Oct 14, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/165433

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit abe388d with merge base fee1ac9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the "release notes: quantization" label Oct 14, 2025
@zhudada0120
Contributor Author

@pytorchbot label "topic: not user facing"

@pytorch-bot pytorch-bot bot added the "topic: not user facing" label Oct 14, 2025
@zhudada0120 zhudada0120 changed the title from "[Code Clean] Clean asserts in torch/ao/quantization/~ and torch/ao/quantization/quantizer/ and torch/ao/quantization/backend_config/" to "[Code Clean] Clean asserts in torch/ao/quantization (root, quantizer, backend_config)" Oct 14, 2025
@zhudada0120
Contributor Author

cc @fffrog @albanD, PTAL, thanks.

@fffrog
Collaborator

fffrog commented Oct 15, 2025

@zhudada0120 there are some code conflicts here

@zhudada0120
Contributor Author

I’ve resolved the merge conflicts; ready for review. @fffrog

@zhudada0120 zhudada0120 force-pushed the clean-asserts-quantization-other branch from 1feb3a9 to 99525d5 on October 16, 2025 at 06:13
@mikaylagawarecki mikaylagawarecki added the "triaged" label (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Oct 17, 2025
albanD
albanD previously approved these changes Oct 20, 2025
Collaborator

@albanD albanD left a comment


Thanks!

@albanD
Collaborator

albanD commented Oct 20, 2025

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the "ciflow/trunk" label (trigger trunk jobs on your pull request) Oct 20, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@zhudada0120 zhudada0120 deleted the clean-asserts-quantization-other branch October 21, 2025 03:48
@zhudada0120 zhudada0120 restored the clean-asserts-quantization-other branch October 21, 2025 03:53
Chao1Han pushed a commit to Chao1Han/pytorch that referenced this pull request Oct 21, 2025
… backend_config) (pytorch#165433)

Replace assert statements with explicit if/raise patterns in:

- torch/ao/quantization/~
- torch/ao/quantization/quantizer/
- torch/ao/quantization/backend_config/

Partially fixes pytorch#164878

Pull Request resolved: pytorch#165433
Approved by: https://github.com/albanD
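
The conversion described in the commit message above can be sketched as follows. This is a hypothetical illustration of the assert-to-if/raise pattern, not the actual PyTorch code; the function name and bounds are invented for the example:

```python
def configure_fake_quant(quant_min: int, quant_max: int, bits: int = 8) -> None:
    """Validate quantization bounds (illustrative only)."""
    # Before the cleanup, checks like this used a bare `assert`, which is
    # silently skipped when Python runs with optimizations (`python -O`):
    #
    #     assert 0 <= quant_min <= 2**bits - 1, "quant_min out of bound"
    #
    # After the cleanup, the check always runs. Note that the condition must
    # be NEGATED relative to the assert; copying it verbatim would raise on
    # valid inputs instead of invalid ones.
    if not (0 <= quant_min <= 2**bits - 1):
        raise AssertionError("quant_min out of bound")
    if quant_min > quant_max:
        raise AssertionError("quant_min must not exceed quant_max")
```

Raising AssertionError (rather than, say, ValueError) keeps the observable behavior of the old asserts for callers that caught them.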
@clee2000
Contributor

@pytorchbot revert -m "I think this broke some quantization tests" -c nosignal

The quantization tests only run on a weekly workflow now; I'll add the label to trigger it in OSS. But here are some logs from internal, which I assume will be the same:

FAIL: test_fake_quant_control (test.quantization.core.test_workflow_ops.TestFakeQuantizeOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test/quantization/core/test_workflow_ops.py", line 622, in test_fake_quant_control
    _LearnableFakeQuantize.with_args(observer=MovingAverageMinMaxObserver, quant_min=0,
  File "...torch/ao/quantization/observer.py", line 88, in __call__
    return self.p(*args, **keywords)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...torch/ao/quantization/_learnable_fake_quantize.py", line 67, in __init__
    raise AssertionError("quant_min out of bound")
AssertionError: quant_min out of bound

======================================================================
FAIL: test_fq_serializable_per_tensor (test.quantization.core.test_workflow_ops.TestFakeQuantizeOps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File ".../test/quantization/core/test_workflow_ops.py", line 602, in test_fq_serializable_per_tensor
    fq_module = FakeQuantizeClass(observer, quant_min, quant_max)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "...torch/ao/quantization/_learnable_fake_quantize.py", line 67, in __init__
    raise AssertionError("quant_min out of bound")
AssertionError: quant_min out of bound
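
The tracebacks above show AssertionError being raised for inputs the tests treat as valid (e.g. quant_min=0). One plausible cause, sketched hypothetically below, is converting `assert cond` into `if cond: raise` without negating the condition; the function names and the 0..255 bound here are invented for illustration and are not the actual `_learnable_fake_quantize.py` code:

```python
def check_quant_min_buggy(quant_min: int) -> None:
    # Buggy conversion: the assert's condition was copied verbatim,
    # so the error fires exactly when the value IS in bounds.
    if 0 <= quant_min <= 255:
        raise AssertionError("quant_min out of bound")


def check_quant_min_fixed(quant_min: int) -> None:
    # Correct conversion: the condition is negated.
    if not (0 <= quant_min <= 255):
        raise AssertionError("quant_min out of bound")
```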

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

pytorchmergebot added a commit that referenced this pull request Oct 21, 2025
…antizer, backend_config) (#165433)"

This reverts commit df64c0c.

Reverted #165433 on behalf of https://github.com/clee2000 due to I think this broke some quantization tests ([comment](#165433 (comment)))
@pytorchmergebot
Collaborator

@zhudada0120 your PR has been successfully reverted.

@pytorch-bot pytorch-bot bot added the "module: cpu" label (CPU specific problem, e.g. perf, algorithm) and removed the "ciflow/quantization-periodic" label Oct 22, 2025
@zhudada0120
Contributor Author

zhudada0120 commented Oct 22, 2025

(quoting the revert comment and failure logs above)

@clee2000 Sorry for the trouble! The issue has been fixed. Let me know if there are any further issues!

@zhudada0120
Contributor Author

Hi @albanD , sorry about this! Due to a mistake in my Git operations, some irrelevant commits got included in the PR. I’ve already fixed the issue and will be extra careful moving forward. Could you please help clean up the automatically added reviewers and labels? Sorry again for the trouble!

@fffrog
Collaborator

fffrog commented Oct 23, 2025

I have checked all the changes, which look good to me.

Waiting for CI to pass first.

@zhudada0120
Contributor Author

cc @fffrog, I have resolved the code conflicts and need to re-trigger CI. Thanks.

@cyyever
Collaborator

cyyever commented Oct 28, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team (raised by workflow job)

Failing merge rule: Core Maintainers

@zhudada0120
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 46 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team (raised by workflow job)

Failing merge rule: Core Maintainers

@albanD
Collaborator

albanD commented Oct 30, 2025

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased clean-asserts-quantization-other onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout clean-asserts-quantization-other && git pull --rebase)

@albanD
Collaborator

albanD commented Oct 30, 2025

Oh, looks like there are some merge conflicts now.

@cyyever
Collaborator

cyyever commented Oct 31, 2025

@pytorchbot rebase -b main

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here

@pytorchmergebot
Collaborator

Rebase failed due to Command git -C /home/runner/work/pytorch/pytorch rebase refs/remotes/origin/main pull/165433/head returned non-zero exit code 1

Rebasing (1/2)
Auto-merging torch/ao/quantization/observer.py
Auto-merging torch/ao/quantization/quantizer/x86_inductor_quantizer.py
CONFLICT (content): Merge conflict in torch/ao/quantization/quantizer/x86_inductor_quantizer.py
error: could not apply 21fdb3433f8... [Code Clean] Clean asserts in torch/ao/quantization/*.py and torch/ao/quantization/quantizer/* and torch/ao/quantization/backend_config/*
hint: Resolve all conflicts manually, mark them as resolved with
hint: "git add/rm <conflicted_files>", then run "git rebase --continue".
hint: You can instead skip this commit: run "git rebase --skip".
hint: To abort and get back to the state before "git rebase", run "git rebase --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Could not apply 21fdb3433f8... # [Code Clean] Clean asserts in torch/ao/quantization/*.py and torch/ao/quantization/quantizer/* and torch/ao/quantization/backend_config/*

Raised by https://github.com/pytorch/pytorch/actions/runs/18961883926

@cyyever
Collaborator

cyyever commented Oct 31, 2025

There are still conflicts with main.

@KarhouTam
Contributor

Hi, @zhudada0120. You can run git pull upstream main --rebase to resolve the conflicts locally.

@zhudada0120
Contributor Author

The conflict has been resolved.

…/quantization/quantizer/* and torch/ao/quantization/backend_config/*
@zhudada0120
Contributor Author

Hi, @cyyever @albanD, I think this PR is ready to be merged. Please take a look. Thanks.

@fffrog
Collaborator

fffrog commented Nov 3, 2025

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here


Labels

ci-no-td: Do not run TD on this PR
ciflow/trunk: Trigger trunk jobs on your pull request
Merged
module: cpu: CPU specific problem (e.g., perf, algorithm)
module: dynamo
module: inductor
oncall: distributed: Add this issue/PR to distributed oncall triage queue
open source
release notes: distributed (checkpoint)
release notes: inductor (aoti)
release notes: quantization: release notes category
Reverted
topic: not user facing: topic category
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module


10 participants