
[BugFix]: Fix typo bug#2980

Merged
maobaolong merged 3 commits into LMCache:dev from princepride:fix-typo-bug
Apr 9, 2026

Conversation

@princepride
Contributor

@princepride princepride commented Apr 8, 2026

What this PR does / why we need it:
I noticed we use hidden_dim_sizes instead of hidden_dim_size. When I run

lmcache describe kvcache --url http://localhost:7555

I get an error:

[2026-04-08 07:54:07,817] LMCache ERROR: Command failed (main.py:41:lmcache.cli.main)
Traceback (most recent call last):
  File "/proj-tango-pvc/users/zhipeng.wang/workspace/LMCache/lmcache/cli/main.py", line 37, in main
    args.func(args)
  File "/proj-tango-pvc/users/zhipeng.wang/workspace/LMCache/lmcache/cli/commands/describe.py", line 299, in execute
    self._describe_kvcache(args)
  File "/proj-tango-pvc/users/zhipeng.wang/workspace/LMCache/lmcache/cli/commands/describe.py", line 310, in _describe_kvcache
    KVCacheDescriber(metrics, data, base_url).describe()
  File "/proj-tango-pvc/users/zhipeng.wang/workspace/LMCache/lmcache/cli/commands/describe.py", line 114, in describe
    self.add_models()
  File "/proj-tango-pvc/users/zhipeng.wang/workspace/LMCache/lmcache/cli/commands/describe.py", line 215, in add_models
    sec.add("hidden_dim_size", "Hidden dim size", layout["hidden_dim_size"])
                                                  ~~~~~~^^^^^^^^^^^^^^^^^^^
KeyError: 'hidden_dim_size'

After this typo fix, you get the expected result:

============ LMCache KV Cache Service ============
Health:                                         OK
URL:                         http://localhost:7555
Engine type:                         MPCacheEngine
Chunk size:                                    256
L1 capacity (GB):                           200.00
L1 used (GB):                        31.66 (15.8%)
Eviction policy:                               N/A
Cached objects:                               1309
Active sessions:                                 3
------------- Model: /tmp/GLM-5-FP8 --------------
Model:                              /tmp/GLM-5-FP8
World size:                                      1
GPU IDs: 623435, 623211, 623775, 623672, 623531, 623280, 623352, 623921
Attention backend:                        vLLM MLA
GPU KV shape:                    NL x [NB, BS, HS]
GPU KV tensor shape:          158 x [918, 64, 132]
Num layers:                                    158
Block size:                                     64
Hidden dim size:                        [132, 576]
Dtype:                                 torch.uint8
MLA:                                          True
Num blocks:                                    918
Cache size per token (bytes):               101436
==================================================

BTW, I noticed a lot of hidden_dim_sizes😂


Note

Low Risk
Small CLI-only typo fix with corresponding test updates; minimal impact beyond correcting field extraction and output.

Overview
Fixes a KeyError in lmcache describe kvcache by changing the per-model KV cache layout field from hidden_dim_size to the correct hidden_dim_sizes key, so describe works with the service’s /api/status payload.

Updates the CLI integration test fixture and assertions to expect hidden_dim_sizes instead of hidden_dim_size.
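The mismatch boils down to the CLI reading a key the server no longer sends. A minimal sketch of a lookup that tolerates both the pre-#2926 key and the current one (the helper name and payload dicts are hypothetical, not LMCache's actual API):

```python
# Sketch: read the hidden-dim field from a model layout payload while
# tolerating both the old key ("hidden_dim_size") and the renamed one
# ("hidden_dim_sizes", a List[int] since LMCache#2926).
# get_hidden_dims and the payloads below are illustrative only.

def get_hidden_dims(layout: dict):
    for key in ("hidden_dim_sizes", "hidden_dim_size"):
        if key in layout:
            return layout[key]
    raise KeyError("layout has neither hidden_dim_sizes nor hidden_dim_size")

new_layout = {"hidden_dim_sizes": [132, 576]}  # payload after the rename
old_layout = {"hidden_dim_size": 132}          # older payload shape

print(get_hidden_dims(new_layout))  # [132, 576]
print(get_hidden_dims(old_layout))  # 132
```

The merged fix simply switches the CLI to the new key rather than adding a fallback, which matches the server's payload after #2926.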

Reviewed by Cursor Bugbot for commit 4bf63af.

@princepride princepride requested a review from ApostaC as a code owner April 8, 2026 08:18
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request renames the 'hidden_dim_sizes' key to 'hidden_dim_size' in the multiprocess server's status report to ensure compatibility with the CLI. Feedback indicates that a regression test should be added to verify this change and prevent future regressions, as required by the repository's style guide for bug fixes.

Comment thread: lmcache/v1/multiprocess/server.py (outdated)

             "num_layers": ctx.num_layers,
             "block_size": ctx.block_size,
-            "hidden_dim_sizes": str(ctx.hidden_dim_sizes),
+            "hidden_dim_size": str(ctx.hidden_dim_sizes),
Contributor


Severity: medium

The fix correctly addresses the KeyError by matching the key name expected by the CLI. However, per the repository style guide (line 39), bug fixes should include regression tests. Please add a test case to verify that the report_status output contains the correct hidden_dim_size key to prevent future regressions.

References
  1. Bug fixes include regression tests (link)
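A regression test along the lines suggested above could look like the following sketch. Note that render_layout_fields is a stand-in for the describe.py logic, not LMCache's actual code; a real test would exercise report_status or the describe command itself.

```python
# Hypothetical regression test: build a layout dict shaped like the
# service's per-model status entry and assert the renamed key is read
# without a KeyError. Names below are illustrative only.

def render_layout_fields(layout: dict) -> list:
    # Mirrors the fixed CLI logic: read "hidden_dim_sizes", not the
    # pre-#2926 "hidden_dim_size" key.
    return [
        ("Num layers", layout["num_layers"]),
        ("Hidden dim sizes", layout["hidden_dim_sizes"]),
    ]

def test_layout_uses_renamed_key():
    layout = {"num_layers": 158, "hidden_dim_sizes": [132, 576]}
    fields = dict(render_layout_fields(layout))
    assert fields["Hidden dim sizes"] == [132, 576]

test_layout_uses_renamed_key()
```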

@deng451e
Collaborator

deng451e commented Apr 8, 2026

Thanks for catching this bug! It’s probably better to update the key in the CLI describe instead: hidden_dim_size was changed to hidden_dim_sizes (List[int]) in #2926 to support kv_groups.

@maobaolong
Collaborator

@princepride Yeah, you can apply this patch to this PR.

diff --git a/lmcache/cli/commands/describe.py b/lmcache/cli/commands/describe.py
--- a/lmcache/cli/commands/describe.py	(revision ee037db2ab35bf42ac12c0be69bf70f9bf578473)
+++ b/lmcache/cli/commands/describe.py	(date 1775630862627)
@@ -212,7 +212,7 @@
             )
             sec.add("num_layers", "Num layers", layout["num_layers"])
             sec.add("block_size", "Block size", layout["block_size"])
-            sec.add("hidden_dim_size", "Hidden dim size", layout["hidden_dim_size"])
+            sec.add("hidden_dim_sizes", "Hidden dim sizes", layout["hidden_dim_sizes"])
             sec.add("dtype", "Dtype", layout["dtype"])
             sec.add("is_mla", "MLA", layout["is_mla"])
             sec.add("num_blocks", "Num blocks", layout["num_blocks"])

@princepride
Contributor Author

> thanks for catching this bug! it’s probably better to update the key in the cli describe, hidden_dim_size was changed to hidden_dim_sizes (List[int]) in #2926 to support kv_groups.

Okay, edited.

Comment thread lmcache/cli/commands/describe.py
@princepride princepride requested a review from hickeyma as a code owner April 9, 2026 06:57

@cursor cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Reviewed by Cursor Bugbot for commit 88d2e9d.

Comment thread lmcache/cli/commands/describe.py
princepride and others added 3 commits April 9, 2026 15:10
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Align with the rename introduced in LMCache#2926 where hidden_dim_size was
changed to hidden_dim_sizes (List[int]) to support kv_groups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Update test fixture and assertion in test_describe.py to match the
hidden_dim_size -> hidden_dim_sizes rename from LMCache#2926.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>
Collaborator

@maobaolong maobaolong left a comment


LGTM @princepride Thanks for this fix.

@maobaolong maobaolong enabled auto-merge (squash) April 9, 2026 08:40
@github-actions github-actions Bot added the full Run comprehensive tests on this PR label Apr 9, 2026
@maobaolong maobaolong merged commit d355c98 into LMCache:dev Apr 9, 2026
34 of 36 checks passed
Oasis-Git pushed a commit to Oasis-Git/LMCache that referenced this pull request Apr 13, 2026
* fix typo bug

Signed-off-by: princepride <wangzhipeng628@gmail.com>

* fix: rename hidden_dim_size to hidden_dim_sizes in describe and server

Align with the rename introduced in LMCache#2926 where hidden_dim_size was
changed to hidden_dim_sizes (List[int]) to support kv_groups.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>

* fix: update test fixture to use hidden_dim_sizes key

Update test fixture and assertion in test_describe.py to match the
hidden_dim_size -> hidden_dim_sizes rename from LMCache#2926.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: princepride <wangzhipeng628@gmail.com>

---------

Signed-off-by: princepride <wangzhipeng628@gmail.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
ftian1 pushed a commit to ftian1/LMCache that referenced this pull request Apr 20, 2026

Labels

full Run comprehensive tests on this PR


3 participants