bug fix: handle saving/loading null layers in recurrent memory #14675
Merged: ggerganov merged 3 commits into ggml-org:master on Jul 23, 2025
Conversation
handle saving/loading null layers in recurrent memory
ggerganov reviewed Jul 14, 2025
compilade approved these changes Jul 14, 2025
compilade (Collaborator) left a comment:
Thanks @l3utterfly! I've tested this with a Jamba model and llama-save-load-state, and it was indeed failing before and is fixed by this change.
I'll add a test case to #14139 (once I also add variants for hybrid models) to help automatically detect this kind of regression with hybrid architectures in the future.
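For context, here is a minimal sketch of the kind of save/load round-trip that llama-save-load-state exercises, assuming the public llama_state_get_size / llama_state_get_data / llama_state_set_data API (model and context setup omitted):

```cpp
// Minimal sketch, assuming the public llama_state_* API; model and
// context setup are omitted.
#include <cstdint>
#include <vector>

#include "llama.h"

static bool roundtrip_state(llama_context * ctx) {
    // Query the serialized size of the full context state (this includes
    // the recurrent memory for models that have it).
    const size_t n_state = llama_state_get_size(ctx);

    std::vector<uint8_t> buf(n_state);
    const size_t n_written = llama_state_get_data(ctx, buf.data(), buf.size());
    if (n_written == 0) {
        return false;
    }

    // Restore the state; before this fix, hybrid models whose non-recurrent
    // layers leave null entries in the recurrent memory crashed here.
    const size_t n_read = llama_state_set_data(ctx, buf.data(), n_written);
    return n_read == n_written;
}
```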
CISC reviewed Jul 14, 2025
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request on Jul 23, 2025
* origin/master: (49 commits)
  ci : correct label refactor->refactoring (ggml-org#14832)
  CUDA: fix quantized KV cache + multiple sequences (ggml-org#14822)
  tests : add non-cont K,V FA tests
  memory : handle saving/loading null layers in recurrent memory (ggml-org#14675)
  ggml: fix loongarch quantize_row_q8_1 error (ggml-org#14827)
  CANN: weight format to NZ for Ascend310P3 (ggml-org#14407)
  CUDA: add fused rms norm (ggml-org#14800)
  ggml : model card yaml tab->2xspace (ggml-org#14819)
  vulkan: fix rms_norm_mul to handle broadcasting dim0 (ggml-org#14817)
  llama : add model type detection for rwkv7 7B&14B (ggml-org#14816)
  imatrix: add option to display importance score statistics for a given imatrix file (ggml-org#12718)
  Mtmd: add a way to select device for vision encoder (ggml-org#14236)
  cuda : implement bf16 cpy ops and enable bf16 cont (ggml-org#14763)
  opencl: remove unreachable `return` (ggml-org#14806)
  server : allow setting `--reverse-prompt` arg (ggml-org#14799)
  cuda: remove linking to cublasLt (ggml-org#14790)
  opencl: fix `im2col` when `KW!=KH` (ggml-org#14803)
  opencl: add conv2d kernel (ggml-org#14403)
  sycl: Fix im2col (ggml-org#14797)
  kleidiai: add support for get_rows (ggml-org#14676)
  ...
taronaeo pushed a commit to taronaeo/llama.cpp-s390x that referenced this pull request on Jul 25, 2025
memory : handle saving/loading null layers in recurrent memory (ggml-org#14675)

* Update llama-memory-recurrent.cpp: handle saving/loading null layers in recurrent memory
* fixed styling issues and updated comments
* fix styling issue

Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
blime4 referenced this pull request in blime4/llama.cpp on Feb 5, 2026
Currently, saving/loading the KV cache of recurrent memory crashes because layers can be null.
This mainly applies to the new LiquidAI/LFM2 models.
Tested with: https://huggingface.co/LiquidAI/LFM2-350M-GGUF
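For illustration, a minimal sketch of the idea behind the fix, using hypothetical names (the actual change lives in src/llama-memory-recurrent.cpp): hybrid models such as LFM2 or Jamba leave null entries in the per-layer recurrent tensor arrays, and the state writer/reader must skip them instead of dereferencing them.

```cpp
// Sketch only; names and structure are hypothetical, not the real
// llama.cpp implementation.
#include <cstddef>
#include <vector>

struct ggml_tensor; // opaque here; provided by ggml in the real code

struct recurrent_layers_sketch {
    std::vector<ggml_tensor *> r_l; // per-layer "r" state tensors; null for non-recurrent layers
    std::vector<ggml_tensor *> s_l; // per-layer "s" state tensors; null for non-recurrent layers

    void write_state(/* serializer omitted */) const {
        for (size_t il = 0; il < r_l.size(); ++il) {
            // Before the fix, null entries were dereferenced here and crashed.
            if (r_l[il] == nullptr || s_l[il] == nullptr) {
                continue; // non-recurrent layer of a hybrid model: nothing to save
            }
            // ... serialize r_l[il] and s_l[il] ...
        }
    }
};
```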