
[Doc] Add doc for LMCache MP mode operator #2731

Merged
ApostaC merged 3 commits into LMCache:dev from royyhuang:doc/lmcache-operator
Mar 13, 2026

Conversation

@royyhuang
Contributor

What this PR does / why we need it:
Adds documentation on how to use the LMCache K8s operator with MP mode.

Special notes for your reviewers:

If applicable:

  • this PR contains user-facing changes (docs added)
  • this PR contains unit tests

Signed-off-by: royyhuang <roy.y.huang@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces extensive documentation for the LMCache Kubernetes operator, designed to streamline the deployment and management of LMCache multiprocess servers. The new guide explains how the operator simplifies complex Kubernetes configurations through a custom resource, offering benefits such as automated resource provisioning, simplified service discovery, and robust validation. It serves as a complete resource for users looking to leverage the operator for efficient LMCache deployments.

Highlights

  • New Documentation for LMCache Kubernetes Operator: Added a comprehensive guide for deploying and managing LMCache multiprocess servers using the Kubernetes operator.
  • Automated Deployment: Explained how the operator automates the deployment and lifecycle management of LMCache servers via a single LMCacheEngine custom resource (an illustrative sketch follows this list).
  • Key Benefits: Detailed the advantages of using the operator, including auto-injected pod settings, node-local service discovery, auto-computed resource sizing, declarative Prometheus integration, and CRD validation.
  • Usage Instructions: Provided step-by-step guides for installing the operator, deploying an LMCacheEngine, and connecting vLLM.
  • CRD Reference and Examples: Included a full reference for the LMCacheEngine CRD specification and practical examples for various configurations.
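
To make the custom-resource workflow concrete, here is a minimal illustrative sketch of an LMCacheEngine manifest. Only spec.image.repository and spec.image.tag are named in this PR's review discussion; the API group/version and the remaining fields are hypothetical placeholders rather than the operator's confirmed schema, so consult the CRD reference in the new docs for the real field names.

    apiVersion: lmcache.ai/v1alpha1    # hypothetical API group/version
    kind: LMCacheEngine
    metadata:
      name: my-lmcache
    spec:
      image:
        repository: lmcache/vllm-openai  # default repository per the docs under review
        tag: v0.1.0                      # pin a version rather than relying on :latest
      # remaining fields (replicas, resources, monitoring, ...) are covered by the CRD reference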


Changelog
  • docs/source/mp/index.rst
    • Added 'operator' to the multiprocess documentation index.
  • docs/source/mp/operator.rst
    • Introduced new documentation for the LMCache Kubernetes operator, covering its features, installation, usage, and CRD reference.
Activity
  • The author indicated that this PR contains user-facing changes, specifically new documentation.


@gemini-code-assist (Bot) left a comment


Code Review

This pull request introduces excellent and comprehensive documentation for the LMCache Kubernetes operator. The new documentation is well-structured, covering everything from installation and configuration to production best practices. My review includes a few suggestions to improve clarity and promote best practices, particularly regarding production configurations for container images and Prometheus monitoring.

Comment on lines +98 to +100
The operator defaults the container image to ``lmcache/vllm-openai:latest``.
Override with ``spec.image.repository`` and ``spec.image.tag`` to pin a
specific version.

medium

The note explains that the default image can be overridden but doesn't clarify why a user should consider changing the repository. The default lmcache/vllm-openai image is large as it contains both vLLM and LMCache. For LMCache server pods in production, a smaller, dedicated image like lmcache/standalone is more appropriate. Please expand this note to provide this guidance for production deployments.

Suggested change
- The operator defaults the container image to ``lmcache/vllm-openai:latest``.
- Override with ``spec.image.repository`` and ``spec.image.tag`` to pin a
- specific version.
+ The operator defaults the container image to ``lmcache/vllm-openai:latest``.
+ For production, consider overriding ``spec.image.repository`` to a dedicated
+ image like ``lmcache/standalone`` and pin a specific ``spec.image.tag``.
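
Applied to a manifest, the suggested override might look like the following sketch. The field paths spec.image.repository and spec.image.tag come from the note above; the surrounding resource structure is assumed for illustration.

    spec:
      image:
        repository: lmcache/standalone  # smaller, dedicated server image suggested for production
        tag: v0.1.0                     # pinned version instead of :latest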

hostIPC: true
containers:
- name: vllm
image: lmcache/vllm-openai:latest

medium

The example for connecting vLLM uses the :latest image tag, which is not recommended for production environments as it can lead to unexpected behavior when the image is updated. To promote best practices and ensure reproducibility, it's better to use a specific version tag. The production example later in this document uses v0.1.0, which would be a good choice here as well.

Suggested change
- image: lmcache/vllm-openai:latest
+ image: lmcache/vllm-openai:v0.1.0
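
In context, the quoted pod spec would then read roughly as follows. This is a sketch: only the hostIPC, container name, and image lines appear in the excerpt above, and the pinned tag follows the suggestion.

    spec:
      hostIPC: true                        # shared host IPC namespace, as in the quoted example
      containers:
      - name: vllm
        image: lmcache/vllm-openai:v0.1.0  # pinned tag instead of :latest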

Comment on lines +568 to +570
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"

medium

The production example enables the ServiceMonitor and also adds prometheus.io/scrape pod annotations. This is redundant. When using the Prometheus Operator, the ServiceMonitor is the idiomatic way to configure scraping, and it targets the metrics service created by the LMCache operator. The pod annotations are for a different discovery mechanism (Prometheus's native Kubernetes SD). Including both can be confusing. To simplify and align with best practices for the Prometheus Operator, I recommend removing the podAnnotations.
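
A sketch of the simplified monitoring block this comment is asking for; the exact serviceMonitor field name is an assumption based on the ServiceMonitor support described here, and the redundant scrape annotations are simply dropped.

    spec:
      monitoring:
        serviceMonitor:
          enabled: true  # the Prometheus Operator scrapes the metrics Service via this ServiceMonitor
      # podAnnotations with prometheus.io/scrape removed: redundant alongside a ServiceMonitor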


@ApostaC left a comment


LGTM!

@ApostaC enabled auto-merge (squash) on March 12, 2026, 22:35
@github-actions (Bot) added the "full: Run comprehensive tests on this PR" label on Mar 12, 2026

@KuntaiDu left a comment


LGTM!

@ApostaC merged commit 730cb71 into LMCache:dev on Mar 13, 2026
23 of 25 checks passed
hyunyul-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Mar 20, 2026
* [doc] add mp operator documentation

* [doc] add operator to the index after merge

Signed-off-by: royyhuang <roy.y.huang@gmail.com>
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
* [doc] add mp operator documentation

* [doc] add operator to the index after merge

Signed-off-by: royyhuang <roy.y.huang@gmail.com>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
* [doc] add mp operator documentation

* [doc] add operator to the index after merge

Signed-off-by: royyhuang <roy.y.huang@gmail.com>

Labels

full: Run comprehensive tests on this PR


3 participants