ggml-ci: add run.sh #2877
Merged
ggerganov merged 1 commit into ggml-org:master from redraskal:ggml-ci on Mar 14, 2025
Conversation
Member
Excellent! @redraskal Sending you a collaborator invite which would allow you to push branches in this repo in order to refine the CI.
Contributor (Author)
Cool, I'll take a look at #2454
ggerganov reviewed Mar 14, 2025

> CMAKE_EXTRA="-DWHISPER_FATAL_WARNINGS=ON"
> if [ ! -z ${GGML_CUDA} ]; then
Member
These checks should check for the GG_BUILD_... environment variables (see the llama.cpp script).
For example, this is the environment on the CUDA node:
So we have to check for GG_BUILD_CUDA here instead of GGML_CUDA.
Contributor
Author
Right, I see what you mean
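The requested change could look like the following sketch. It is an assumption about the final script, not the merged code: only CMAKE_EXTRA, GGML_CUDA, and GG_BUILD_CUDA come from the snippet and discussion above.

```shell
#!/bin/bash
# Sketch of the fix discussed above: gate the CUDA path on the CI node's
# GG_BUILD_CUDA environment variable, and only then turn on the GGML_CUDA
# build option via CMake. (Surrounding structure is illustrative.)
CMAKE_EXTRA="-DWHISPER_FATAL_WARNINGS=ON"

if [ ! -z "${GG_BUILD_CUDA}" ]; then
    CMAKE_EXTRA="${CMAKE_EXTRA} -DGGML_CUDA=ON"
fi
```

This way the script reacts to how the CI node is labeled (GG_BUILD_CUDA) rather than to the build option it is supposed to set (GGML_CUDA).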
buxuku pushed a commit to buxuku/whisper.cpp that referenced this pull request on Mar 26, 2025

Closes #2787
I created a new CI script (ci/run.sh) modeled after the one in llama.cpp.
- Models can be selected via the GGML_TEST_MODELS env variable as a comma-separated list (GGML_TEST_MODELS="tiny,base,..."); otherwise all models are used.
- Added a GG_BUILD_LOW_PERF env var to limit models to "tiny", "base", and "small" for faster CI on low-perf systems (maybe needs adjustment).

What else should be added? Running quantize or verifying binding generation?
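A minimal sketch of the model-selection logic described above. The variable names ALL_MODELS/MODELS and the exact model list are assumptions for illustration, not the actual contents of ci/run.sh; only GGML_TEST_MODELS and GG_BUILD_LOW_PERF come from the description.

```shell
#!/bin/bash
# Pick which models to test: an explicit comma-separated GGML_TEST_MODELS
# list wins, then GG_BUILD_LOW_PERF restricts the run to the three smallest
# models, otherwise every model in ALL_MODELS is used.
ALL_MODELS="tiny base small medium large-v3"   # illustrative list

if [ -n "${GGML_TEST_MODELS}" ]; then
    # turn "tiny,base,..." into a space-separated word list
    MODELS=$(echo "${GGML_TEST_MODELS}" | tr ',' ' ')
elif [ -n "${GG_BUILD_LOW_PERF}" ]; then
    MODELS="tiny base small"
else
    MODELS="${ALL_MODELS}"
fi
```

The rest of the script can then simply iterate with `for model in ${MODELS}; do ... done`.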
The script downloads required models if they don't already exist, storing them in $MNT/models/.

Example output:
GGML_TEST_MODELS="tiny,base" ./ci/run.sh ./tmp/results ./tmp/mnt

/tmp/results/README.md:
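The download-if-missing behavior mentioned above could be sketched like this. The helper name download_if_missing is hypothetical; models/download-ggml-model.sh is whisper.cpp's existing download script and ggml-<model>.bin is its file-naming convention, but how ci/run.sh actually invokes it is an assumption here.

```shell
#!/bin/bash
# Sketch: fetch a model only when it is not already cached under
# $MNT/models/, so repeated CI runs reuse previously downloaded files.
MNT=${MNT:-./tmp/mnt}
mkdir -p "${MNT}/models"

download_if_missing() {
    model=$1
    path="${MNT}/models/ggml-${model}.bin"
    if [ ! -f "${path}" ]; then
        # hypothetical invocation of the repo's download script
        ./models/download-ggml-model.sh "${model}" "${MNT}/models"
    fi
}
```

On a warm cache the function is a no-op, which keeps low-perf CI nodes from re-downloading multi-hundred-megabyte models on every run.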