Merged
Conversation
KerfuffleV2 approved these changes on Oct 14, 2023
KerfuffleV2 (Contributor) left a comment:
This looks pretty straightforward. Tested, and it seems to work (even with ROCm). As expected, text generation is much faster when offloading.
"The image features a white fox sitting on the ground, with its mouth wide open, possibly yawning or growling. The fox appears to be in a forest setting, surrounded by grass and trees. The scene is depicted in a black and white style, giving it a classic and timeless feel." That's about my profile picture. It's supposed to be a wolf cub, but still pretty impressive!
joelkuiper added a commit to vortext/llama.cpp that referenced this pull request on Oct 19, 2023:
* 'master' of github.com:ggerganov/llama.cpp:
  - fix embeddings when using CUDA (ggml-org#3657)
  - llama : avoid fprintf in favor of LLAMA_LOG (ggml-org#3538)
  - readme : update hot-topics & models, detail windows release in usage (ggml-org#3615)
  - CLBlast: Fix temporary buffer size for f16 conversion (wsize)
  - train-text-from-scratch : fix assert failure in ggml-alloc (ggml-org#3618)
  - editorconfig : remove trailing spaces
  - server : documentation of JSON return value of /completion endpoint (ggml-org#3632)
  - save-load-state : fix example + add ci test (ggml-org#3655)
  - readme : add Aquila2 links (ggml-org#3610)
  - tokenizer : special token handling (ggml-org#3538)
  - k-quants : fix quantization ranges (ggml-org#3646)
  - llava : fix tokenization to not add bos between image embeddings and user prompt (ggml-org#3645)
  - MPT : support GQA for replit-code-v1.5 (ggml-org#3627)
  - Honor -ngl option for Cuda offloading in llava (ggml-org#3621)
Closes #3616.
I simply forgot to set `n_gpu_layers` when loading the model. This should fix it.