Initial support for CMake#75

Merged
ggerganov merged 1 commit into ggml-org:master from etra0:add_cmake
Mar 13, 2023
Conversation

@etra0
Contributor

@etra0 etra0 commented Mar 13, 2023

This is a draft adding support for building llama.cpp with CMake. Most of it was copied from whisper.cpp's CMake setup, so the two should be nearly identical.

This currently builds on my M1 Air with the same performance as the Makefile build, but it doesn't build on Windows because of #74. Reverting that commit makes it compile on Windows as well.

This currently works both on my M1 Air and on my Windows 10 machine.
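For context, a minimal CMake setup along the lines described above (borrowed from whisper.cpp's approach, with Accelerate used for BLAS on Apple hardware) might look like the following sketch. This is illustrative only, not the actual CMakeLists.txt added by this PR; target and source names are assumptions:

```cmake
# Hypothetical minimal CMakeLists.txt sketch -- not the file from this PR.
cmake_minimum_required(VERSION 3.12)
project(llama C CXX)

# Assumed source layout: main.cpp plus the ggml C library.
add_executable(llama main.cpp ggml.c)

# On Apple platforms, link the Accelerate framework for BLAS,
# as whisper.cpp's CMake does.
if (APPLE)
    find_library(ACCELERATE_FRAMEWORK Accelerate)
    if (ACCELERATE_FRAMEWORK)
        target_link_libraries(llama PRIVATE ${ACCELERATE_FRAMEWORK})
        target_compile_definitions(llama PRIVATE GGML_USE_ACCELERATE)
    endif()
endif()
```

A typical out-of-source build with such a file would be `cmake -B build` followed by `cmake --build build`, which works the same way on macOS and Windows.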

@etra0 etra0 marked this pull request as draft March 13, 2023 02:33
@ggerganov
Member

Excellent! Why is it "Draft"?

@etra0
Contributor Author

etra0 commented Mar 13, 2023

Whoops, sorry, ready for review now.

@etra0 etra0 marked this pull request as ready for review March 13, 2023 17:05
@ggerganov ggerganov merged commit ed6849c into ggml-org:master Mar 13, 2023
@etra0 etra0 deleted the add_cmake branch March 13, 2023 17:56
ggerganov added a commit that referenced this pull request Mar 13, 2023
blackhole89 pushed a commit that referenced this pull request Mar 15, 2023
SamuelOliveirads pushed a commit to SamuelOliveirads/llama.cpp that referenced this pull request Dec 29, 2025
When I changed iqk_mul_mat to use type-1 dot products for type-0
legacy quants, I forgot to also change the vec_dot_type when
the dot product is done via ggml as in flash attention.
This commit fixes it.

Co-authored-by: Iwan Kawrakow <iwan.kawrakow@gmail.com>