
[Correctness]: Fix the overlapping race condition for non-MP as well#2706

Merged
ApostaC merged 4 commits into LMCache:dev from sammshen:non-mp-overlap-fix on Mar 13, 2026

Conversation

@sammshen (Contributor) commented Mar 6, 2026

Essentially the same issue as #2671, where the read and write streams have a race condition on shared prefix blocks across requests.

Why was this not caught by the previous correctness tests?

  • The high-concurrency test used the same LMCache chunk size as the vLLM block size, which avoids this error.
  • The single-request test had only one request, which is not enough concurrency to trigger the race condition.

This PR also changes the chunk size of the high-concurrency correctness test back to 256.
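
For intuition, here is a minimal sketch (not code from this PR) of why matching the LMCache chunk size to the vLLM block size hides the race: vLLM caches whole blocks, so the already-cached prefix always ends on a chunk boundary when chunk_size == block_size, and no retrieved chunk partially overlaps a shared prefix block. With a larger chunk size the boundary can fall mid-chunk, and the retrieve writes into blocks another request's stream may still be using. All names and numbers below are illustrative.

# Illustrative only: shows when a retrieved chunk would partially overlap the
# prefix that vLLM already holds in its paged KV cache.

def partially_overlapping_chunks(total_tokens, chunk_size, vllm_cached_tokens):
    """Return chunk ranges whose start lies before vllm_cached_tokens but whose
    end lies after it, i.e. chunks a retrieve would partially rewrite."""
    overlaps = []
    for start in range(0, total_tokens, chunk_size):
        end = min(start + chunk_size, total_tokens)
        if start < vllm_cached_tokens < end:
            overlaps.append((start, end))
    return overlaps

block_size = 16
vllm_cached_tokens = 19 * block_size  # vLLM always caches whole blocks

# chunk_size == block_size (the old test config): the cached prefix ends on a
# chunk boundary, so nothing is partially rewritten and the race stays hidden.
print(partially_overlapping_chunks(1024, 16, vllm_cached_tokens))   # []

# chunk_size == 256 (the restored test config): the boundary falls mid-chunk,
# so the retrieve would touch a shared prefix block unless tokens are skipped.
print(partially_overlapping_chunks(1024, 256, vllm_cached_tokens))  # [(256, 512)]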

Samuel Shen added 2 commits March 6, 2026 15:25
Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Signed-off-by: Samuel Shen <slshen@uchciago.edu>
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical correctness issue where concurrent read and write operations on shared KV cache blocks could lead to data corruption. By explicitly tracking and accounting for tokens already cached by VLLM, the system now intelligently skips transferring redundant or conflicting data, thereby eliminating the race condition and ensuring data integrity during KV cache management.

Highlights

  • Race Condition Fix: Addressed a race condition between read and write streams on shared prefix blocks, which was previously observed in a similar context.
  • VLLM Cached Tokens Handling: Introduced vllm_cached_tokens as a parameter in KV cache retrieval and transfer operations to correctly account for tokens already cached by VLLM.
  • GPU Transfer Logic: Modified GPU transfer functions to skip a prefix of tokens during host-to-device transfers, preventing overwrites of data already present in the VLLM cache.


Changelog
  • lmcache/integration/vllm/vllm_v1_adapter.py
    • Added vllm_cached_tokens parameter to lmcache_engine.retrieve_layer calls within start_load_kv.
  • lmcache/v1/gpu_connector/gpu_connectors.py
    • Introduced logic in to_gpu methods to calculate skip_prefix_n_tokens based on vllm_cached_tokens.
    • Passed skip_prefix_n_tokens to lmc_ops.multi_layer_kv_transfer to prevent overwriting cached data.
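
A rough sketch of the mechanism the changelog describes, assuming skip_prefix_n_tokens means "how many leading tokens of this transfer are already resident in vLLM's paged cache"; the helper name and argument layout below are illustrative, not the actual connector code.

# Hypothetical derivation of a per-transfer skip count from vllm_cached_tokens;
# the real to_gpu implementation may differ in structure.

def compute_skip_prefix_n_tokens(start_token, num_tokens, vllm_cached_tokens):
    """Tokens [start_token, start_token + num_tokens) are about to be copied
    host -> device; the first `skip` of them are already in vLLM's cache and
    must not be overwritten."""
    end_token = start_token + num_tokens
    skip = min(end_token, vllm_cached_tokens) - start_token
    return max(0, min(skip, num_tokens))

# A 256-token chunk starting at token 256, while vLLM already caches the first
# 304 tokens: skip the first 48 tokens of this chunk.
print(compute_skip_prefix_n_tokens(256, 256, 304))  # 48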

Signed-off-by: Samuel Shen <slshen@uchciago.edu>

echo "[INFO] Preparing LMCache config (cpu.yaml)..."
cat <<EOF > cpu.yaml
chunk_size: 16

@sammshen (Contributor, Author) commented on the diff above:

this is why the correctness tests could not catch the error before

token_mask[:lmcache_cached_tokens],
kvcaches=kvcaches,
slot_mapping=slot_mapping[:lmcache_cached_tokens],
vllm_cached_tokens=request.load_spec.vllm_cached_tokens,

@sammshen (Contributor, Author) commented on the diff above:

we pass this as a kwarg, which eventually makes its way to the GPU connector
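
A toy sketch of that plumbing, just to show the shape of the call chain; the function names below are placeholders rather than the actual lmcache signatures.

# Placeholder names; in the real code the kwarg travels from the vLLM adapter
# through retrieve_layer into the GPU connector, but the forwarding idea is the same.

def retrieve_layer(tokens, token_mask, *, kvcaches, slot_mapping, **kwargs):
    # Extra keyword arguments (e.g. vllm_cached_tokens) are not interpreted
    # here; they are forwarded untouched toward the GPU connector.
    return connector_to_gpu(kvcaches, slot_mapping, **kwargs)

def connector_to_gpu(kvcaches, slot_mapping, vllm_cached_tokens=0, **_):
    # Only the connector inspects vllm_cached_tokens, turning it into the
    # number of prefix tokens to skip during the host -> device copy.
    print(f"will skip the first {vllm_cached_tokens} already-cached tokens")

retrieve_layer(None, None, kvcaches=None, slot_mapping=None, vllm_cached_tokens=48)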

@gemini-code-assist (Contributor, Bot) left a comment:

Code Review

This pull request aims to address a race condition for non-MP configurations by passing vllm_cached_tokens to GPU data transfer operations, enabling the skipping of shared token prefixes to prevent data corruption. However, the current implementation lacks proper bounds checking for the number of tokens to skip, which can lead to an integer underflow in the underlying C++ GPU kernel. This critical vulnerability could result in a massive grid launch and out-of-bounds memory access (heap buffer overflow) on the host system. It is highly recommended to cap the skip count by the chunk size in the Python connector and add defensive checks in the C++ kernel. Additionally, I've noted a small piece of duplicated code that could be refactored for improved maintainability.
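
A minimal sketch of the guard the review is asking for, assuming the connector knows how many tokens the current transfer covers (num_tokens); this illustrates capping the skip count on the Python side and is not the committed fix.

# Illustrative bounds check only; the actual fix lives in the connector and the
# C++ kernel and may be structured differently.

def clamp_skip_prefix(skip_prefix_n_tokens: int, num_tokens: int) -> int:
    """Keep the skip count in [0, num_tokens] so the kernel never sees a
    negative remaining-token count, which would underflow when treated as an
    unsigned size and trigger a huge grid launch / out-of-bounds access."""
    return max(0, min(skip_prefix_n_tokens, num_tokens))

assert clamp_skip_prefix(48, 256) == 48
assert clamp_skip_prefix(300, 256) == 256  # never skip more than the transfer
assert clamp_skip_prefix(-5, 256) == 0     # never pass a negative skip count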

Two comment threads on lmcache/v1/gpu_connector/gpu_connectors.py (outdated).

@DongDongJu (Collaborator) left a comment:

LGTM.
So my understanding is that layerwise operates at a small enough size that this problem doesn't happen there?

@sammshen (Contributor, Author) commented Mar 9, 2026

@DongDongJu layerwise should have the same issue but has other bugs that prevent revealing this one

@DongDongJu (Collaborator) replied:

Lol, I see. Do you know what the remaining bug for layerwise is, by any chance?

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Samuel Shen <slshen@tensormesh.ai>

@sammshen (Contributor, Author) commented Mar 9, 2026

@DongDongJu not sure, did your StopIteration fix go through?

@sammshen (Contributor, Author) commented:

I see that you mention the problem disappeared: #2268

@ApostaC (Contributor) left a comment:

LGTM!

@ApostaC enabled auto-merge (squash) on March 12, 2026 at 21:01
@github-actions (Bot) added the "full (Run comprehensive tests on this PR)" label on Mar 12, 2026
@ApostaC merged commit 0a132d4 into LMCache:dev on Mar 13, 2026
27 of 29 checks passed
hyunyul-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Mar 20, 2026
…MCache#2706)

* use the previous skip_n_prefix_tokens codepath

* change the chunk size of both correctnes tesets

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
…MCache#2706)

* use the previous skip_n_prefix_tokens codepath

* change the chunk size of both correctnes tesets

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
…MCache#2706)

* use the previous skip_n_prefix_tokens codepath

* change the chunk size of both correctnes tesets

Signed-off-by: Samuel Shen <slshen@uchciago.edu>

Labels

full (Run comprehensive tests on this PR)


3 participants