adding fused uint4x2_mixed_mm to inductor#106516
HDCharles wants to merge 9 commits into gh/HDCharles/162/base
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106516
Note: Links to docs will display an error until the docs builds have been completed.
✅ 2 Unrelated Failures as of commit a49ff9d:
BROKEN TRUNK - The following job failed but was present on the merge base 858b465. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Summary: This is needed for int4 weight-only quantization. We match on the specific unpack operation that unpacks the uint4x2 into int4s so we can have a fused kernel for it. Note that even if the user isn't specifically doing this, the two operations are mathematically equivalent, so it won't cause issues. Ideally, at some point, full prologue fusion for the mm arguments would be able to handle this chain, but until then this type of kernel is needed. Test Plan: python test/inductor/test_pattern_matcher.py -k "uint4x2" python test/inductor/test_torchinductor.py -k "uint4x2" Reviewers: Subscribers: Tasks: Tags: [ghstack-poisoned]
@pytorchmergebot merge
Merge failed
Reason: This PR needs a required label. If not applicable, please add the appropriate label instead. To add a label, you can comment to pytorchbot. For more information, see the merge documentation.
Details for Dev Infra team: raised by workflow job.
@pytorchmergebot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
Summary: This is needed for int4 weight-only quantization. We match on the
specific unpack operation that unpacks the uint4x2 into int4s so we can
have a fused kernel for it. Note that even if the user isn't specifically
doing this, the two operations are mathematically equivalent, so it won't
cause issues (for some reason int8 bitwise logic in triton and pytorch
doesn't match, so that's the only exception). Ideally, at some point, full
prologue fusion for the mm arguments would be able to handle this chain,
but until then this type of kernel is needed.
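As an illustration of the unpack operation being matched, here is a minimal pure-Python sketch (the function names are hypothetical, not the actual ATen/inductor ops): each uint8 stores two unsigned 4-bit values, and unpacking recovers them with a mask and a shift.

```python
def pack_uint4x2(lo: int, hi: int) -> int:
    """Pack two unsigned 4-bit values (0..15) into one byte."""
    assert 0 <= lo < 16 and 0 <= hi < 16
    return (hi << 4) | lo

def unpack_uint4x2(byte: int) -> tuple[int, int]:
    """Recover the low and high 4-bit halves of a byte."""
    return byte & 0xF, (byte >> 4) & 0xF

packed = pack_uint4x2(3, 12)           # (12 << 4) | 3 == 195 == 0xC3
assert unpack_uint4x2(packed) == (3, 12)
```

The mask-and-shift pair is the shape of the chain the pattern matcher looks for in the graph feeding the mm.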
Test Plan:
python test/inductor/test_pattern_matcher.py -k "uint4x2"
python test/inductor/test_torchinductor.py -k "uint4x2"
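To make the "mathematically equivalent" claim concrete, here is a hedged pure-Python sketch (not the actual kernel) checking that unpacking the packed weights up front and then doing the matmul gives the same result as unpacking each byte on the fly inside the inner loop, which is conceptually what a fused mixed mm does:

```python
import random

random.seed(0)

def unpack_byte(b):
    """Split one uint8 into its two unsigned 4-bit halves."""
    return b & 0xF, (b >> 4) & 0xF

M, K, N = 2, 3, 4  # B_packed is K x N bytes, so the unpacked B is K x 2N
A = [[random.randint(-8, 7) for _ in range(K)] for _ in range(M)]
B_packed = [[random.randint(0, 255) for _ in range(N)] for _ in range(K)]

# Reference path: materialize the unpacked K x 2N matrix, then matmul.
B = [[v for b in row for v in unpack_byte(b)] for row in B_packed]
ref = [[sum(A[i][k] * B[k][j] for k in range(K)) for j in range(2 * N)]
       for i in range(M)]

# "Fused" path: unpack each byte inside the inner product, never
# materializing B -- conceptually what the fused kernel does.
fused = [[0] * (2 * N) for _ in range(M)]
for i in range(M):
    for k in range(K):
        for j in range(N):
            lo, hi = unpack_byte(B_packed[k][j])
            fused[i][2 * j] += A[i][k] * lo
            fused[i][2 * j + 1] += A[i][k] * hi

assert fused == ref
```

This is only a numeric sanity check of the equivalence; layout, dtypes, and the int8 bitwise caveat mentioned above are handled by the real kernel and tests.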
Reviewers:
Subscribers:
Tasks:
Tags:
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov