Fix: Safely handle layerwise cache shape dimensions in remote backend #2751
deng451e merged 37 commits into LMCache:dev from
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses a compatibility issue in the remote backend where 3D shapes used for layerwise caching needed to be adapted to a 4D protocol. The solution introduces explicit type checking for layerwise cache keys and helper functions to safely pad and unpad tensor shapes during data transmission and reception, ensuring correct data handling without relying on potentially ambiguous shape heuristics.
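For orientation, here is a minimal sketch of what such pad/unpad helpers could look like. The names pad_shape_to_4d and strip_shape_padding are taken from a later commit message in this thread, but the signatures and validation below are assumptions, not the merged code.

```python
# Sketch only: illustrates the pad/strip idea described above; the real
# LMCache helpers may differ in signature and error handling.
from typing import Tuple


def pad_shape_to_4d(shape: Tuple[int, ...]) -> Tuple[int, ...]:
    """Pad a 1D-4D shape with trailing zeros so it always serializes as 4 ints."""
    if len(shape) > 4:
        raise ValueError(f"Shape has more than 4 dimensions: {shape}")
    return tuple(shape) + (0,) * (4 - len(shape))


def strip_shape_padding(shape: Tuple[int, ...]) -> Tuple[int, ...]:
    """Drop trailing zeros added by pad_shape_to_4d; real dims are assumed non-zero."""
    dims = list(shape)
    while dims and dims[-1] == 0:
        dims.pop()
    return tuple(dims)


# Round trip for a layerwise 3D shape [num_tokens, 2, hidden_dim].
assert strip_shape_padding(pad_shape_to_4d((256, 2, 4096))) == (256, 2, 4096)
```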
Code Review
This pull request correctly implements the padding and unpadding of tensor shapes for layerwise caching in the remote backend. By explicitly checking the key type (LayerCacheEngineKey), the changes avoid potential issues with shape-based heuristics. The new helper functions are well-defined, and their integration into LMCServerConnector's put and get methods is sound. My review includes one suggestion to improve code structure by encapsulating the new helper functions within the class that uses them.
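Purely as illustration of that structural suggestion, keeping the helpers on the class might look roughly like this. The static-method placement is an assumption and not what the PR ultimately does (a later commit in this thread instead makes them module-level functions).

```python
# Hypothetical sketch of the reviewer's encapsulation suggestion, not the merged code.
class LMCServerConnector:
    @staticmethod
    def _pad_shape_to_4d(shape):
        """Pad a sub-4D shape with trailing zeros before it goes on the wire."""
        return tuple(shape) + (0,) * (4 - len(shape))

    @staticmethod
    def _strip_shape_padding(shape):
        """Remove trailing-zero padding after reading a shape off the wire."""
        dims = list(shape)
        while dims and dims[-1] == 0:
            dims.pop()
        return tuple(dims)
```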
6ab791d to f658119
Signed-off-by: Tony Lin <tony.lin@intel.com>
d5372b2 to 481aef3
Signed-off-by: Tony Lin <tony.lin@intel.com>
Signed-off-by: Tony Lin <tony.lin@intel.com>
@cursor review
- Fix TensorMemoryObj.get_size() to use the actual size of raw_data instead of group_prefix_sum[-1], preventing out-of-bounds memory access when byte_array is called after reshape_partial_chunk truncates raw_data. group_prefix_sum is preserved for use by get_tensor(index).
- Refactor _pad_shape_to_4d and _strip_shape_padding from private static methods of RemoteMetadata to module-level functions (pad_shape_to_4d, strip_shape_padding), eliminating cross-class access to private methods from ClientMetaMessage and ServerMetaMessage.
Signed-off-by: Tony Lin <tony.lin@intel.com>
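A rough sketch of the sizing change this commit describes, with TensorMemoryObj internals simplified: only raw_data, group_prefix_sum, get_size(), byte_array, and get_tensor(index) are names from the commit message; everything else here is an assumption.

```python
# Illustrative only: a simplified stand-in for TensorMemoryObj to show the idea.
class TensorMemoryObjSketch:
    def __init__(self, raw_data: bytearray, group_prefix_sum: list):
        self.raw_data = raw_data
        # Preserved so get_tensor(index) can still locate per-group offsets.
        self.group_prefix_sum = group_prefix_sum

    def get_size(self) -> int:
        # Report the actual buffer size: after reshape_partial_chunk truncates
        # raw_data, group_prefix_sum[-1] can exceed it and would send byte_array
        # out of bounds.
        return len(self.raw_data)

    @property
    def byte_array(self) -> memoryview:
        return memoryview(self.raw_data)[: self.get_size()]
```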
    num_tokens = len(slot_mapping_full)

    mem_fmt = MemoryFormat.KV_MLA_FMT if self.use_mla else MemoryFormat.KV_T2D

Hello Tony, Thanks for the work.
Signed-off-by: Tony Lin <tony.lin@intel.com>
Hi @DongDongJu, thank you for pointing this out. The code logic was refined several times, but the docstring wasn't updated accordingly. I fixed it in the latest commit.
    else:
        # Layerwise 3D: [num_tokens, 2, hidden_dim]
        # Layerwise MLA 2D: [num_tokens, hidden_dim]
        token_dim = 0
Token dimension determined by shape length, not format
Medium Severity
reshape_partial_chunk infers token_dim solely from the number of shape dimensions rather than consulting the memory format. For any 3D shape (non-4D), it assumes token_dim = 0. This is correct for KV_2LTD ([num_tokens, 2, hidden_dim]) and MLA 2D, but wrong for KV_T2D ([2, num_tokens, hidden_dim]) where the token dimension is 1. The MemoryFormat.KV_T2D.token_dim() method confirms it returns 1. Using memory_obj.meta.fmt.token_dim() instead of branching on shape length would be safer and consistent with how other code resolves token positions.
Reviewed by Cursor Bugbot for commit d946696.
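A hedged sketch of the fix Bugbot suggests above, asking the memory format for the token dimension instead of branching on shape length. The wrapper function and its signature are invented for illustration; fmt.token_dim() and the format layouts come from the review comment.

```python
import torch


# Sketch of the suggested approach; not the actual reshape_partial_chunk code.
def truncate_to_valid_tokens(kv_tensor: torch.Tensor, fmt, num_valid_tokens: int) -> torch.Tensor:
    # Ask the memory format where the token dimension lives instead of guessing
    # from the number of dimensions: KV_2LTD is [num_tokens, 2, hidden_dim]
    # (token dim 0), while KV_T2D is [2, num_tokens, hidden_dim] (token dim 1).
    token_dim = fmt.token_dim()
    return kv_tensor.narrow(token_dim, 0, num_valid_tokens)
```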
Signed-off-by: Tony Lin <tony.lin@intel.com>
Signed-off-by: Tony Lin <tony.lin@intel.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
Reviewed by Cursor Bugbot for commit bab0a7d.
Signed-off-by: Tony Lin <tony.lin@intel.com>
fix #2752
Make sure to pad the layerwise KV shape to 4D and strip it correctly in the remote backend.
This PR fixes the error message below, which appears when layerwise=True with a remote backend:
(EngineCore_DP0 pid=45164) [2026-03-10 14:36:14,376] LMCache ERROR: Put task failed for key LayerCacheEngineKey(model_name='/workspace/Meta-Llama-3-8B-Instruct/', world_size=1, worker_id=0, chunk_hash=4132912831621080023, dtype=torch.bfloat16, request_configs=None, tags=None, _dtype_str='bfloat16', layer_id=8): Shape dimension should be 4 (remote_backend.py:196:lmcache.v1.storage_backend.remote_backend)
(EngineCore_DP0 pid=45164) [2026-03-10 14:36:14,377] LMCache ERROR: Put task failed for key LayerCacheEngineKey(model_name='/workspace/Meta-Llama-3-8B-Instruct/', world_size=1, worker_id=0, chunk_hash=4132912831621080023, dtype=torch.bfloat16, request_configs=None, tags=None, _dtype_str='bfloat16', layer_id=9): Shape dimension should be 4 (remote_backend.py:196:lmcache.v1.storage_backend.remote_backend)
Note
Medium Risk
Touches wire-format shape serialization and byte sizing used by remote put/get; mistakes could corrupt data or break backward compatibility for existing remote caches. Changes are localized and covered by new unit tests for sub-4D shape round-trips.
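As a rough idea of what a sub-4D round-trip test could look like: the import location and test layout below are assumptions; only the helper names come from the commit message earlier in this thread.

```python
import pytest

# Assumed import location for the module-level helpers named in the commit
# message above; they may live elsewhere in the LMCache tree.
from lmcache.v1.storage_backend.remote_backend import (
    pad_shape_to_4d,
    strip_shape_padding,
)


@pytest.mark.parametrize(
    "shape",
    [
        (256, 2, 4096),     # layerwise 3D: [num_tokens, 2, hidden_dim]
        (256, 4096),        # layerwise MLA 2D: [num_tokens, hidden_dim]
        (32, 256, 2, 128),  # a regular 4D shape should pass through unchanged
    ],
)
def test_shape_padding_round_trip(shape):
    padded = pad_shape_to_4d(shape)
    assert len(padded) == 4
    assert strip_shape_padding(padded) == tuple(shape)
```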
Overview
Fixes remote-backend serialization and partial-chunk handling to work with layerwise KV caches and vLLM MLA.
Remote protocol messages (RemoteMetadata, ClientMetaMessage, ServerMetaMessage) now pad sub-4D shapes to 4 integers on write and strip trailing-zero padding on read, avoiding failures when layerwise caches are 2D/3D. Remote connector partial reads now compute the token dimension correctly for both 4D and layerwise shapes and also update internal size accounting so get_size()/byte_array match the truncated payload.
Separately, TensorMemoryObj.byte_array now uses the logical size (get_size()) instead of the raw buffer size to avoid leaking allocator padding, and the vLLM layerwise GPU connector relaxes KV-format assertions/allocations to support MLA (KV_MLA_FMT vs KV_T2D).
Reviewed by Cursor Bugbot for commit c19ec92.
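For intuition about the wire-format change, here is a minimal sketch of a fixed 4-int shape field being padded on write and stripped on read. The struct layout is illustrative only and is not LMCache's actual ClientMetaMessage/ServerMetaMessage encoding.

```python
import struct

# Illustrative encoding with a fixed-width 4-int shape field; the real
# RemoteMetadata/ClientMetaMessage/ServerMetaMessage layout differs.
SHAPE_STRUCT = struct.Struct("!4q")  # always four signed 64-bit ints on the wire


def encode_shape(shape: tuple) -> bytes:
    # Pad 2D/3D layerwise shapes with trailing zeros so they fill the 4-int slot.
    padded = tuple(shape) + (0,) * (4 - len(shape))
    return SHAPE_STRUCT.pack(*padded)


def decode_shape(buf: bytes) -> tuple:
    dims = list(SHAPE_STRUCT.unpack(buf))
    # Strip the trailing-zero padding to recover the original 2D/3D/4D shape.
    while dims and dims[-1] == 0:
        dims.pop()
    return tuple(dims)


# A layerwise MLA 2D shape [num_tokens, hidden_dim] survives the round trip.
assert decode_shape(encode_shape((256, 4096))) == (256, 4096)
```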