[CI]: Fix the LMCache random throughput being higher than native vllm #2864
sammshen merged 2 commits into LMCache:dev from
Conversation
Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical inaccuracy in the LMCache throughput benchmark, where LMCache was consistently showing an unwarranted speedup over the baseline on random workloads. The root cause was identified as a cold-start overhead, specifically tokenizer and template compilation, that was only impacting the LMCache server during the benchmark. To rectify this, the changes introduce a pre-benchmarking warmup phase for both the LMCache and baseline servers, ensuring a fair comparison. Additionally, a new sanity check has been implemented to detect and flag any future measurement asymmetries where LMCache might appear significantly faster than expected on non-cacheable workloads, thereby improving the reliability and integrity of the performance metrics.
Code Review
This pull request introduces a sanity check for LMCache benchmark results on random workloads to detect measurement asymmetry and adds a warmup_server function to prevent cold-start overhead from skewing benchmark results. Feedback includes ensuring the curl command in the warmup function correctly handles HTTP errors and improving the efficiency of parsing the speedup check output by using read instead of cut.
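For context, a warmup function along these lines might look as follows. This is an illustrative sketch based on the fragments quoted below; the function name, arguments, request body, and defaults are assumptions, not the exact code in this PR.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Illustrative sketch: send a few throwaway chat requests so that one-time
# tokenizer/template compilation happens before the timed benchmark starts.
warmup_server() {
  local port="$1"
  local model="$2"
  local num_warmup="${3:-3}"   # assumed default number of warmup requests

  for i in $(seq 1 "$num_warmup"); do
    curl -fs -X POST "http://localhost:${port}/v1/chat/completions" \
      -H "Content-Type: application/json" \
      -d "{\"model\": \"${model}\", \"messages\": [{\"role\": \"user\", \"content\": \"warmup ${i}\"}], \"max_tokens\": 8}" \
      > /dev/null
  done
}
```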
# (BPE compilation, template compilation, etc.) which skews the
# benchmark since lm-eval (Step 3) only warms the LMCache server.
for i in $(seq 1 "$num_warmup"); do
    curl -s -X POST "http://localhost:${port}/v1/chat/completions" \
The curl command should use the -f (--fail) flag. Currently, if the server returns an HTTP error (e.g., 4xx or 5xx), curl will still exit with a status code of 0. Since the script runs with set -e, this means a failed warmup will not stop the script, leading to skewed benchmark results. Adding -f ensures that curl will exit with a non-zero status on server errors, causing the script to fail as intended.
- curl -s -X POST "http://localhost:${port}/v1/chat/completions" \
+ curl -fs -X POST "http://localhost:${port}/v1/chat/completions" \
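To make the failure mode concrete, here is a small standalone illustration; the port and endpoint are placeholders, not taken from the PR:

```bash
# Without -f: curl exits 0 even if the server answers 4xx/5xx,
# so a script running under `set -e` keeps going after a failed warmup.
curl -s -o /dev/null "http://localhost:8000/v1/chat/completions"
echo "exit code: $?"    # 0 despite an HTTP error response

# With -f (--fail): an HTTP 4xx/5xx makes curl exit with code 22,
# which aborts the script when `set -e` is active.
curl -fs -o /dev/null "http://localhost:8000/v1/chat/completions"
echo "exit code: $?"    # 22 on a 4xx/5xx response
```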
speedup_status=$(echo "$speedup_check" | cut -d'|' -f1)
speedup_pct=$(echo "$speedup_check" | cut -d'|' -f2)
Using cut in separate command substitutions to parse the output is less efficient than using the shell's built-in read command. You can parse both values in a single line without forking external processes, which is a common best practice in shell scripting.
- speedup_status=$(echo "$speedup_check" | cut -d'|' -f1)
- speedup_pct=$(echo "$speedup_check" | cut -d'|' -f2)
+ IFS='|' read -r speedup_status speedup_pct <<< "$speedup_check"
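As a quick usage example of the suggested one-liner (the status and percentage values here are made up):

```bash
speedup_check="FAIL|46.2"    # hypothetical "status|percent" string
IFS='|' read -r speedup_status speedup_pct <<< "$speedup_check"
echo "$speedup_status"       # -> FAIL
echo "$speedup_pct"          # -> 46.2
```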
…LMCache#2864) test hypothesis Signed-off-by: Samuel Shen <slshen@uchciago.edu> Co-authored-by: Samuel Shen <slshen@uchciago.edu>
The MP throughput benchmark consistently shows LMCache ~46% faster than baseline on random workloads, where it should actually show a slight slowdown.
Root cause: lm-eval sends 600 requests to the LMCache server but none to the baseline. With VLLM_ENABLE_V1_MULTIPROCESSING=0, the first batch of requests to a cold server incurs ~25s of tokenizer/template compilation overhead that gets included in the benchmark duration.
Verified on this machine: same server, same GPU, cold=81.50s vs warm=55.57s — delta of exactly 25.93s.
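One rough way to observe this kind of cold-start gap (not the exact procedure used for the numbers above) is to time the first request against a freshly started server and compare it with an immediate repeat; the port, payload, and model name below are placeholders:

```bash
# Time the first (cold) request, then the identical request again (warm).
# A large gap on the first call points at one-time tokenizer/template compilation.
BODY='{"model": "MODEL_NAME", "messages": [{"role": "user", "content": "hi"}], "max_tokens": 8}'
for label in cold warm; do
  t=$(curl -fs -o /dev/null -w '%{time_total}' -X POST \
        "http://localhost:8000/v1/chat/completions" \
        -H "Content-Type: application/json" -d "$BODY")
  echo "${label}: ${t}s"
done
```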
Fix:
- Warm up both the LMCache and baseline servers before the benchmark, so neither pays the one-time tokenizer/template compilation cost inside the timed window.
- Add a sanity check that flags runs where LMCache appears significantly faster than baseline on the random workload (catches future measurement asymmetries); a sketch of such a check is shown below.
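The sanity check could take a shape like the following. The function name, variable names, and the 5% threshold are assumptions for illustration; the "status|percent" output format matches the string parsed earlier in the review.

```bash
# Illustrative sanity check: on a random (non-cacheable) workload, LMCache
# should not be meaningfully faster than the baseline.
check_random_speedup() {
  local lmcache_tput="$1"     # throughput of the LMCache run
  local baseline_tput="$2"    # throughput of the baseline run
  local max_speedup_pct=5     # assumed threshold; anything above is suspicious

  local pct
  pct=$(awk -v a="$lmcache_tput" -v b="$baseline_tput" \
        'BEGIN { printf "%.1f", (a - b) / b * 100 }')

  if awk -v p="$pct" -v m="$max_speedup_pct" 'BEGIN { exit !(p > m) }'; then
    echo "FAIL|${pct}"
  else
    echo "OK|${pct}"
  fi
}

# Example use, feeding the result into the parsing shown earlier:
speedup_check=$(check_random_speedup 14.6 10.0)
IFS='|' read -r speedup_status speedup_pct <<< "$speedup_check"
echo "$speedup_status $speedup_pct"   # -> FAIL 46.0
```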