gguf: add script for converting falcon 180B huggingface safetensors model to gguf#3049
logicchains wants to merge 1 commit into ggml-org:master
Conversation
@TheBloke please test and verify :)

@logicchains yea, having a separate file is not ideal. imo in an ideal world we would have 1
The delta between this file and

You can take a look at e276e4b to see how we've tried to consolidate this before.
Thanks for the updated script - it does work. Here is a sample run on M2 Ultra: (video: falcon-180b-0.mp4)

I agree with @akawrykow's suggestion to merge this into
Also adds Falcon-180B support. Closes ggml-org#3049. Co-authored-by: jb <jonathan.t.barnard@gmail.com>


It's just a slight modification of convert-falcon-hf-to-gguf.py; not sure if we want to merge the two into one script somehow to avoid duplication.
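One way the two near-duplicate scripts could be folded into one, as the discussion above suggests, is to detect which weight format a model directory contains and pick the loader accordingly. The sketch below is purely illustrative and is not the actual llama.cpp implementation; the function name and the file patterns it checks are assumptions based on common Hugging Face layouts (`*.safetensors` shards with an optional `model.safetensors.index.json`, versus `pytorch_model*.bin` shards).

```python
# Hypothetical sketch: dispatch on the weight format found in a model
# directory, so one convert script can handle both safetensors and
# pytorch .bin checkpoints instead of keeping two near-duplicate scripts.
from pathlib import Path


def detect_weight_format(model_dir: str) -> str:
    """Return 'safetensors' or 'pytorch' depending on which shard files
    are present in model_dir; raise if neither format is found."""
    d = Path(model_dir)
    # Sharded safetensors checkpoints ship either an index JSON or the
    # shard files themselves (e.g. model-00001-of-00002.safetensors).
    if (d / "model.safetensors.index.json").exists() or list(d.glob("*.safetensors")):
        return "safetensors"
    # Classic PyTorch checkpoints use pytorch_model*.bin shards.
    if list(d.glob("pytorch_model*.bin")):
        return "pytorch"
    raise ValueError(f"no recognized weight files in {model_dir}")
```

A single entry point could then call the matching tensor loader and share all of the GGUF-writing logic, which is where most of the duplication between the two scripts lives.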