[CLI]: Server command #2836
Conversation
Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
Summary of Changes: Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request integrates the LMCache server launch functionality directly into the command-line interface.
Code Review
This pull request introduces a new server command to the lmcache CLI, providing a user-friendly way to launch the LMCache server. The implementation correctly wraps the existing server logic. My review focuses on adherence to the project's style guide. The main points are the need for tests for this new feature and the addition of docstrings for public methods in the new command class, both of which are required by the style guide.
Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
    storage_manager_config=parse_args_to_config(args),
    prometheus_config=parse_args_to_prometheus_config(args),
    telemetry_config=parse_args_to_telemetry_config(args),
)
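The diff above follows a composition pattern: each `parse_args_to_*_config` helper turns the shared argparse namespace into one config object, and the server entrypoint receives them all. Below is a minimal, hedged sketch of that pattern; the dataclass and helper shown here are illustrative stand-ins, not LMCache's real config types or signatures.

```python
import argparse
from dataclasses import dataclass


# Illustrative config object; LMCache's real config classes differ.
@dataclass
class PrometheusConfig:
    port: int


def parse_args_to_prometheus_config(args: argparse.Namespace) -> PrometheusConfig:
    # Pull only the fields this config cares about out of the shared namespace.
    return PrometheusConfig(port=args.prometheus_port)


parser = argparse.ArgumentParser()
parser.add_argument("--prometheus-port", type=int, default=9090)
args = parser.parse_args([])

# Each helper consumes the same namespace independently.
cfg = parse_args_to_prometheus_config(args)
print(cfg)  # PrometheusConfig(port=9090)
```

The benefit of this shape is that the CLI layer stays a thin adapter: adding a new config only means adding one more `parse_args_to_*` call at the composition site.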
There was a problem hiding this comment.
Server command inherits unused metrics output flags
Low Severity
ServerCommand extends BaseCommand, whose register() unconditionally calls _add_output_args(parser), adding --format and --output flags. However, execute() never calls create_metrics() and never reads these attributes — it only delegates to run_http_server. These dead flags will appear in lmcache server -h, misleading users into thinking they have an effect on a long-running server process.
Triggered by project rule: LMCache Code Review Style Guide
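One way to address the dead-flags finding is to let `ServerCommand` opt out of the output-flag registration instead of inheriting it unconditionally. The sketch below is a hypothetical illustration of that pattern with stand-in class and method names (`BaseCommand`, `_add_output_args`, etc. are modeled on the review comment, not copied from the LMCache source).

```python
import argparse


class BaseCommand:
    """Stand-in for the base CLI command described in the review."""

    def register(self, parser: argparse.ArgumentParser) -> None:
        self._add_args(parser)
        # Unconditional in the flagged design: every subcommand gets
        # --format/--output, whether or not it emits metrics.
        self._add_output_args(parser)

    def _add_args(self, parser: argparse.ArgumentParser) -> None:
        pass

    def _add_output_args(self, parser: argparse.ArgumentParser) -> None:
        parser.add_argument("--format", default="table")
        parser.add_argument("--output")


class ServerCommand(BaseCommand):
    def register(self, parser: argparse.ArgumentParser) -> None:
        # Override: skip _add_output_args, since a long-running server
        # never calls create_metrics() and never reads these flags.
        self._add_args(parser)

    def _add_args(self, parser: argparse.ArgumentParser) -> None:
        parser.add_argument("--host", default="127.0.0.1")
        parser.add_argument("--port", type=int, default=5555)


parser = argparse.ArgumentParser(prog="lmcache server")
ServerCommand().register(parser)
args = parser.parse_args(["--host", "0.0.0.0"])
print(args.host, args.port)     # 0.0.0.0 5555
print(hasattr(args, "format"))  # False: the dead flags no longer appear in -h
```

An alternative with the same effect is a class-level switch (e.g. a `needs_output_args = False` attribute checked inside the base `register()`), which keeps the override logic in one place.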
Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
    lmcache mock --name my-run --num-items 5

    # Launch the LMCache server (ZMQ + HTTP)
    lmcache server --engine-type blend --host 0.0.0.0 --port 5555
Why do we have `--engine-type blend` here?
Removed engine-type option from server command.
Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 2 total unresolved issues (including 1 from previous review).
* initial commit
* fix imports
* add unit tests and doc strings
* update documentation to use lmcache server
* fix non CUDA UT
* fix UT stub
* Update CLI command by removing engine-type option: removed engine-type option from server command.

Signed-off-by: Samuel Shen <slshen@tensormesh.ai>


`lmcache server` wraps `python -m lmcache.v1.multiprocess.http_server`

Note
Low Risk
Adds a new CLI entrypoint and documentation updates without modifying underlying server/runtime logic; main risk is mismatched/incorrect CLI wiring or defaults causing launch failures.
Overview

Adds a new `lmcache server` subcommand that composes the existing MP server/storage/HTTP frontend/Prometheus/telemetry argparse helpers and calls `run_http_server()` with configs parsed from CLI args.

Registers the new command in the CLI command registry, adds unit tests verifying argument registration/defaults and that `execute()` invokes `run_http_server`, and updates the CLI/MP docs to reference `lmcache server` instead of `python -m lmcache.v1.multiprocess.http_server` (including quickstart/deployment/config examples and command listings).

Written by Cursor Bugbot for commit 5c72a69. This will update automatically on new commits.
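The unit tests described above can be sketched with `unittest.mock`: patch the server entrypoint and assert that the command's `execute()` delegates to it. The `execute` function and the keyword arguments below are simplified stand-ins for the PR's actual `ServerCommand.execute()` and `run_http_server` signatures, which may differ.

```python
import argparse
from unittest import mock


def execute(args: argparse.Namespace, run_http_server) -> None:
    # Simplified stand-in for ServerCommand.execute(): in the real
    # command, configs are first built from args via the parse_args_to_*
    # helpers, then handed to the HTTP server entrypoint.
    run_http_server(host=args.host, port=args.port)


# Replace the entrypoint with a Mock so the test never binds a socket.
server_main = mock.Mock()
args = argparse.Namespace(host="0.0.0.0", port=5555)

execute(args, server_main)

# The command should delegate exactly once, with the parsed values.
server_main.assert_called_once_with(host="0.0.0.0", port=5555)
```

Testing through a mock like this keeps the CLI tests fast and hermetic: they verify the wiring (the "mismatched/incorrect CLI wiring" risk called out above) without starting a long-running server.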