ggml : fix I8MM Q4_1 scaling factor conversion#10562
Merged
Conversation
ggerganov commented on Nov 28, 2024
ggml/src/ggml-cpu/ggml-cpu-quants.c (Outdated)
Comment on lines 2387 to 2391
ggerganov (Member, Author):
This fixes a bug where y->d was not converted to F32, resulting in completely wrong numbers when going through this CPU branch.
slaren reviewed on Nov 28, 2024
ggml/src/ggml-cpu/ggml-cpu-quants.c (Outdated)
slaren (Member):
We need to remove the ARM runtime feature detection completely, it doesn't work at all and never will. So I would prefer if at least we don't make that task worse by adding more checks like this.
ggerganov (Member, Author):
Ok, I will change the PR to just include the F16 -> F32 fix in the Q4_1 kernel.
Force-pushed 46a4ed0 to 1a6a669 (Compare)
Force-pushed 1a6a669 to 5acff8f (Compare)
slaren approved these changes on Nov 29, 2024
target #10561
These changes fix an "illegal instruction" crash on M1 Pro, which does not do a runtime check for the availability of I8MM. We now check ggml_cpu_has_matmul_int8() and, if it is false, we unpack the 2x2 multiplication into 4 dot products.

This fix aside, I am wondering if we should drop the int nrc support in the ggml_vec_dot kernels to keep it simple and proceed to implement proper GEMMs similar to the work in ggml-cpu-aarch64.c?