adding fused uint4x2_mixed_mm to inductor#106516

Closed
HDCharles wants to merge 9 commits intogh/HDCharles/162/basefrom
gh/HDCharles/162/head

Conversation

@HDCharles
Contributor

@HDCharles HDCharles commented Aug 3, 2023

Stack from ghstack (oldest at bottom):

Summary: This is needed for int4 weight-only quantization. We match on the specific unpack operation that unpacks a uint4x2 into int4s so we can emit a fused kernel for it. Note that even if the user isn't specifically doing int4 quantization, the two operations are mathematically equivalent, so the rewrite won't cause issues (the one exception is int8 bitwise logic, which for some reason doesn't match between Triton and PyTorch). Ideally, full prologue fusion of the mm arguments would eventually handle this chain, but until then this type of kernel is needed.

Test Plan:

python test/inductor/test_pattern_matcher.py -k "uint4x2"
python test/inductor/test_torchinductor.py -k "uint4x2"
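For reference, the uint4x2 layout packs two unsigned 4-bit values into one byte (low nibble and high nibble). A minimal sketch of the pack/unpack arithmetic, with illustrative names and nibble ordering assumed for the example (not the actual inductor pattern-matcher code):

```python
# Illustrative sketch of the uint4x2 packing scheme: two unsigned 4-bit
# values stored in one uint8 byte. Function names and the low/high nibble
# convention are assumptions for illustration only.

def pack_uint4x2(lo: int, hi: int) -> int:
    """Pack two values in [0, 15] into one byte, hi in the upper nibble."""
    assert 0 <= lo < 16 and 0 <= hi < 16
    return (hi << 4) | lo

def unpack_uint4x2(byte: int) -> tuple[int, int]:
    """Invert pack_uint4x2: recover (lo, hi) via mask and shift."""
    return byte & 0xF, (byte >> 4) & 0xF
```

A mask-and-shift chain of this shape on the mm's weight argument is what the pattern matcher recognizes, so the unpack can run inside the matmul kernel instead of materializing the unpacked tensor first.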


cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @ngimel @yf225 @chenyang78 @kadeng @muchulee8 @aakhundov

@pytorch-bot

pytorch-bot bot commented Aug 3, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/106516

Note: Links to docs will display an error until the docs builds have been completed.

✅ 2 Unrelated Failures

As of commit a49ff9d:

BROKEN TRUNK - The following job failed but was present on the merge base 858b465:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

HDCharles added a commit that referenced this pull request Aug 3, 2023
ghstack-source-id: 567155d
Pull Request resolved: #106516
@HDCharles HDCharles changed the title int4x2 WIP adding fused uint4x2_mixed_mm to inductor Aug 3, 2023
@HDCharles HDCharles requested a review from jansel August 3, 2023 17:46
HDCharles added a commit that referenced this pull request Aug 3, 2023
ghstack-source-id: 488967f
Pull Request resolved: #106516
@HDCharles HDCharles requested review from cpuhrsch and vkuzo August 3, 2023 17:46
HDCharles added a commit that referenced this pull request Aug 10, 2023
ghstack-source-id: 6a808e2
Pull Request resolved: #106516
HDCharles added a commit that referenced this pull request Aug 10, 2023
ghstack-source-id: 34f1797
Pull Request resolved: #106516
HDCharles added a commit that referenced this pull request Aug 11, 2023
ghstack-source-id: dfbd0ac
Pull Request resolved: #106516
HDCharles added a commit that referenced this pull request Aug 14, 2023
ghstack-source-id: f4c469a
Pull Request resolved: #106516
HDCharles added a commit that referenced this pull request Aug 14, 2023
ghstack-source-id: 282ab42
Pull Request resolved: #106516
@HDCharles
Contributor Author

@pytorchmergebot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Aug 15, 2023
@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job

@HDCharles
Contributor Author

@pytorchmergebot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team


@facebook-github-bot facebook-github-bot deleted the gh/HDCharles/162/head branch August 18, 2023 14:16