
[CI]: Fix the LMCache random throughput being higher than native vllm#2864

Merged

sammshen merged 2 commits into LMCache:dev from sammshen:fix-mp-correctness on Mar 26, 2026

Conversation

@sammshen (Contributor) commented on Mar 25, 2026

The MP throughput benchmark consistently shows LMCache ~46% faster than baseline on random workloads, where it should instead show a slight slowdown (random prompts produce no cache hits, so LMCache only adds lookup overhead).

Root cause: lm-eval sends 600 requests to the LMCache server but none to the baseline. With VLLM_ENABLE_V1_MULTIPROCESSING=0, the first batch of requests to a cold server incurs ~25s of tokenizer/template compilation overhead that gets included in the benchmark duration.

Verified on this machine: same server, same GPU, cold=81.50s vs warm=55.57s, a delta of exactly 25.93s.
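
A quick way to observe the cold-start overhead at the request level is to time the first request against a later one. This is a minimal sketch, not the benchmark script itself; the port, payload, and model name are illustrative placeholders.

  # Hypothetical repro sketch; substitute the real port and model name.
  time_one_request() {
    curl -s -o /dev/null -w '%{time_total}\n' \
      -X POST "http://localhost:8000/v1/chat/completions" \
      -H "Content-Type: application/json" \
      -d '{"model": "MODEL", "messages": [{"role": "user", "content": "hi"}], "max_tokens": 1}'
  }

  echo "cold: $(time_one_request)s"   # first request pays the compilation overhead
  echo "warm: $(time_one_request)s"   # subsequent requests do not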

Fix:

  • Send 3 warmup chat completion requests to both servers before benchmarking
    (a sketch of the approach follows after this list)
  • Add an upper-bound check that fails if LMCache is >10% faster than baseline on
    the random workload (this catches future measurement asymmetries)
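
A minimal sketch of the warmup approach, assuming an OpenAI-compatible /v1/chat/completions endpoint. The function name mirrors the warmup_server function discussed in the review below, but the payload, model name, and ports are illustrative, not taken from the actual script.

  # Sketch only: payload, model name, and ports are placeholders.
  warmup_server() {
    local port="$1"
    local num_warmup="${2:-3}"
    for i in $(seq 1 "$num_warmup"); do
      # -f makes curl exit non-zero on HTTP errors, so a `set -e` script
      # stops instead of benchmarking a server that never warmed up.
      curl -fs -X POST "http://localhost:${port}/v1/chat/completions" \
        -H "Content-Type: application/json" \
        -d '{"model": "MODEL", "messages": [{"role": "user", "content": "warmup"}], "max_tokens": 8}' \
        > /dev/null
    done
  }

  warmup_server 8000   # LMCache server (port illustrative)
  warmup_server 8001   # baseline vLLM server (port illustrative)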

Samuel Shen added 2 commits March 25, 2026 04:01
Signed-off-by: Samuel Shen <slshen@uchciago.edu>
@gemini-code-assist (bot)

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical inaccuracy in the LMCache throughput benchmark, where LMCache was consistently showing an unwarranted speedup over the baseline on random workloads. The root cause was identified as a cold-start overhead, specifically tokenizer and template compilation, that was only impacting the LMCache server during the benchmark. To rectify this, the changes introduce a pre-benchmarking warmup phase for both LMCache and baseline servers, ensuring a fair comparison. Additionally, a new sanity check has been implemented to detect and flag any future measurement asymmetries where LMCache might appear significantly faster than expected on non-cacheable workloads, thereby enhancing the reliability and integrity of performance metrics.

Highlights

  • Benchmark Accuracy Fix: Resolved an issue where LMCache throughput benchmarks on random workloads showed an artificial speedup due to cold-start overhead affecting only the LMCache server.
  • Server Warmup: Implemented a warmup_server function to send initial chat completion requests to both LMCache and baseline vLLM servers, ensuring tokenizer and template compilation overhead is accounted for before benchmarking.
  • Benchmark Sanity Check: Added a new validation step that fails the benchmark if LMCache is more than 10% faster than the baseline on random workloads, preventing misleading results from measurement asymmetries (a sketch of such a check follows below).
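
A minimal sketch of such a check, with illustrative variable names: lmcache_tput and baseline_tput stand for the measured requests/sec of each run and are assumed non-empty; the real script emits a status/percentage pair, as the review comments below show.

  # Sketch only: variable names are placeholders, not the script's actual ones.
  speedup_pct=$(awk -v a="$lmcache_tput" -v b="$baseline_tput" \
    'BEGIN { printf "%.1f", (a - b) / b * 100 }')
  if awk -v p="$speedup_pct" 'BEGIN { exit !(p > 10) }'; then
    echo "FAIL: LMCache is ${speedup_pct}% faster than baseline on the random workload"
    echo "      (likely a measurement asymmetry, not a real speedup)"
    exit 1
  fi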
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a sanity check for LMCache benchmark results on random workloads to detect measurement asymmetry and adds a warmup_server function to prevent cold-start overhead from skewing benchmark results. Feedback includes ensuring the curl command in the warmup function correctly handles HTTP errors and improving the efficiency of parsing the speedup check output by using read instead of cut.

# (BPE compilation, template compilation, etc.) which skews the
# benchmark since lm-eval (Step 3) only warms the LMCache server.
for i in $(seq 1 "$num_warmup"); do
  curl -s -X POST "http://localhost:${port}/v1/chat/completions" \

Severity: high

The curl command should use the -f (--fail) flag. Currently, if the server returns an HTTP error (e.g., 4xx or 5xx), curl will still exit with a status code of 0. Since the script runs with set -e, this means a failed warmup will not stop the script, leading to skewed benchmark results. Adding -f ensures that curl will exit with a non-zero status on server errors, causing the script to fail as intended.

Suggested change:

- curl -s -X POST "http://localhost:${port}/v1/chat/completions" \
+ curl -fs -X POST "http://localhost:${port}/v1/chat/completions" \
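
For illustration, a quick way to see the difference in exit codes (assuming network access to httpbin.org; exit status 22 is curl's documented code for HTTP errors when --fail is set):

  # Without -f: an HTTP 500 response still exits 0, so `set -e` does not trip.
  curl -s -o /dev/null https://httpbin.org/status/500; echo "exit=$?"   # exit=0

  # With -f: the same response exits 22, which aborts a `set -e` script.
  curl -fs -o /dev/null https://httpbin.org/status/500; echo "exit=$?"  # exit=22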

Comment on lines +196 to +197
speedup_status=$(echo "$speedup_check" | cut -d'|' -f1)
speedup_pct=$(echo "$speedup_check" | cut -d'|' -f2)

Severity: medium

Using cut in separate command substitutions to parse the output is less efficient than using the shell's built-in read command. You can parse both values in a single line without forking external processes, which is a common best practice in shell scripting.

Suggested change:

- speedup_status=$(echo "$speedup_check" | cut -d'|' -f1)
- speedup_pct=$(echo "$speedup_check" | cut -d'|' -f2)
+ IFS='|' read -r speedup_status speedup_pct <<< "$speedup_check"
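
For illustration, the builtin parses both fields in one step with no subshells or external processes (the value here is made up):

  speedup_check="FAIL|46.2"
  IFS='|' read -r speedup_status speedup_pct <<< "$speedup_check"
  echo "$speedup_status"   # FAIL
  echo "$speedup_pct"      # 46.2

Note that the IFS assignment applies only to the read invocation, so the shell's global field splitting is left untouched.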

@sammshen sammshen requested review from ApostaC and deng451e March 25, 2026 08:23
@sammshen sammshen enabled auto-merge (squash) March 25, 2026 23:27
@ApostaC (Contributor) left a comment


LGTM!

@ApostaC added the "full" label (Run comprehensive tests on this PR) on Mar 25, 2026
@sammshen merged commit beaee2f into LMCache:dev on Mar 26, 2026
36 checks passed
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
…LMCache#2864)

test hypthesis

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Co-authored-by: Samuel Shen <slshen@uchciago.edu>

Labels

full: Run comprehensive tests on this PR
