[WOQ][Inductor] Enable CUDA coverage for _weight_int8pack_mm #163461
bbeckca wants to merge 1 commit into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/163461
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit e449d65 with merge base 9d0d98a.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@bbeckca It appears that a CUDA implementation of _weight_int8pack_mm exists. Would you be able to provide the associated pull request? Thank you.

Sounds good. Added reference to test plan.
Force-pushed 08d289d to a0db88f

Summary:
What: Unskip the CUDA path for test_int8_weight_only_quant in test_torchinductor.py.
Why: Confirm the CUDA backend for _weight_int8pack_mm is registered.
Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:test_inductor_cuda
```
https://www.internalfb.com/intern/testinfra/testrun/2533275104869494
Reviewed By: jerryzh168
Differential Revision: D82926440
Subsequent force-pushes carried the same commit message: a0db88f to 3eac96f, 3eac96f to 66de88e, 66de88e to 62a7219, 62a7219 to 562a3b3, 562a3b3 to e834f94, and e834f94 to e449d65.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
What: Unskip the CUDA path for test_int8_weight_only_quant in test_torchinductor.py, as the kernel was added by #159325.
Why: Confirm the CUDA backend for _weight_int8pack_mm is registered.
Test Plan:
```
buck2 test 'fbcode//mode/opt' fbcode//caffe2/test/inductor:test_inductor_cuda
```
https://www.internalfb.com/intern/testinfra/testrun/2533275104869494
Differential Revision: D82926440
Pull Request resolved: #163461
Approved by: https://github.com/jerryzh168
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov @coconutruben
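For context on what the op under test computes: _weight_int8pack_mm performs a matmul against per-output-channel, symmetrically int8-quantized weights with float scales. The sketch below is a minimal pure-Python reference of that computation under the usual layout assumptions (weights stored [N, K], activations [M, K], one scale per output channel); the function names are illustrative and this is not the actual kernel or its exact numerics.

```python
def quantize_per_channel(w):
    # Symmetric per-output-channel int8 quantization:
    # scale = max|row| / 127, so quantized values fit in [-127, 127].
    qmax = 127
    scales = [(max(abs(v) for v in row) / qmax) or 1.0 for row in w]
    w_int8 = [[round(v / s) for v in row] for row, s in zip(w, scales)]
    return w_int8, scales

def int8_weight_only_mm(x, w_int8, scales):
    # y[m][n] = (sum_k x[m][k] * w_int8[n][k]) * scales[n]
    # i.e. dequantization is folded into the output as a per-channel scale.
    return [[sum(xk * wk for xk, wk in zip(xrow, wrow)) * s
             for wrow, s in zip(w_int8, scales)]
            for xrow in x]
```

A test like test_int8_weight_only_quant can then check that the quantized matmul stays close to the full-precision result, since symmetric 8-bit quantization bounds the per-channel relative error.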