[Correctness]: Fix the overlapping race condition for non-MP as well #2706
ApostaC merged 4 commits into LMCache:dev
Conversation
Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Summary of Changes (Gemini Code Assist): This pull request resolves a critical correctness issue where concurrent read and write operations on shared KV cache blocks could lead to data corruption. By explicitly tracking and accounting for tokens already cached by vLLM, the system now skips transferring redundant or conflicting data, eliminating the race condition and ensuring data integrity during KV cache management.
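For intuition, here is a minimal sketch of that skip logic; the function and parameter names are illustrative, not LMCache's actual API:

```python
# Hypothetical sketch, not LMCache's actual code: tokens vLLM already
# holds in its paged KV cache must not be rewritten by the LMCache load
# path, or the read and write streams can race on shared prefix blocks.

def tokens_to_transfer(lmcache_cached_tokens: int, vllm_cached_tokens: int) -> range:
    """Token indices that actually need a GPU transfer.

    Tokens [0, vllm_cached_tokens) already live in vLLM's KV cache, so
    copying them again is redundant and racy; only the remainder of the
    LMCache hit is loaded.
    """
    start = min(vllm_cached_tokens, lmcache_cached_tokens)
    return range(start, lmcache_cached_tokens)
```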
```bash
echo "[INFO] Preparing LMCache config (cpu.yaml)..."
cat <<EOF > cpu.yaml
chunk_size: 16
```
this is why the correctness tests could not catch the error before
```python
token_mask[:lmcache_cached_tokens],
kvcaches=kvcaches,
slot_mapping=slot_mapping[:lmcache_cached_tokens],
vllm_cached_tokens=request.load_spec.vllm_cached_tokens,
```
we pass this as kwargs which eventually makes its way to gpu connector
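As a rough illustration of that plumbing (the class and method names below are made up for the sketch, not LMCache's exact connector interface):

```python
# Hedged sketch of the kwargs plumbing: the retriever forwards
# vllm_cached_tokens through **kwargs, and the GPU connector uses it to
# skip the prefix that vLLM already holds before issuing the copy.

class GpuConnectorSketch:
    def to_gpu(self, memory_obj, start: int, end: int, **kwargs) -> None:
        vllm_cached_tokens = kwargs.get("vllm_cached_tokens", 0)
        skip = max(0, min(vllm_cached_tokens - start, end - start))
        if skip == end - start:
            return  # the whole chunk is already in vLLM's KV cache
        self._copy_tokens(memory_obj, start + skip, end)

    def _copy_tokens(self, memory_obj, start: int, end: int) -> None:
        # Actual device copy elided in this sketch.
        ...
```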
Code Review
This pull request aims to address a race condition for non-MP configurations by passing vllm_cached_tokens to GPU data transfer operations, enabling the skipping of shared token prefixes to prevent data corruption. However, the current implementation lacks proper bounds checking for the number of tokens to skip, which can lead to an integer underflow in the underlying C++ GPU kernel. This critical vulnerability could result in a massive grid launch and out-of-bounds memory access (heap buffer overflow) on the host system. It is highly recommended to cap the skip count by the chunk size in the Python connector and add defensive checks in the C++ kernel. Additionally, I've noted a small piece of duplicated code that could be refactored for improved maintainability.
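A minimal sketch of the clamp the review asks for on the Python side (names are illustrative; the same guard would be mirrored defensively in the C++ kernel):

```python
# Sketch only: clamp the skip count so it can never exceed the chunk's
# token count. Without the clamp, a skip larger than the token count
# would underflow when the kernel computes num_tokens - skip as an
# unsigned value, producing a huge grid launch and out-of-bounds access.

def clamped_skip(vllm_cached_tokens: int, chunk_start: int, chunk_size: int) -> int:
    skip = vllm_cached_tokens - chunk_start
    return max(0, min(skip, chunk_size))
```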
DongDongJu left a comment:
LGTM.
So, my understanding is that the layerwise path works at a small enough size that this problem does not happen?
@DongDongJu layerwise should have the same issue, but other bugs prevent revealing this one.

LoL. I see. Do you know what the remaining layerwise bug is, by any chance?
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
@DongDongJu not sure, did your StopIteration fix go through?

I see that you mentioned the problem disappeared: #2268
[Correctness]: Fix the overlapping race condition for non-MP as well (LMCache#2706)

* use the previous skip_n_prefix_tokens codepath
* change the chunk size of both correctness tests

Signed-off-by: Samuel Shen <slshen@uchciago.edu>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
Essentially the same issue as #2671, where the read and write streams have a race condition on shared prefix blocks across requests.

Why was this not caught by the previous correctness tests?

The high-concurrency test used the same LMCache chunk size as the vLLM block size, which avoids this error (see the sketch after this description). The single-request test had only one request, which was not enough concurrency to trigger the race condition.

This PR also changes the chunk size of the high-concurrency correctness test back to 256.
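For intuition on the chunk-size point, a hedged illustration (assuming, as vLLM's prefix caching does, that cached token counts are block-aligned):

```python
# vLLM caches whole blocks, so vllm_cached_tokens is a multiple of the
# block size. When the LMCache chunk size equals the block size, every
# chunk boundary lands on a block boundary: each chunk is either fully
# cached or fully uncached, so no partially overlapping chunk exists
# for the read and write streams to race on.

def has_partial_chunk(vllm_cached_tokens: int, chunk_size: int) -> bool:
    return vllm_cached_tokens % chunk_size != 0

block_size = 16
cached = 3 * block_size  # block-aligned, as vLLM guarantees

assert not has_partial_chunk(cached, chunk_size=16)  # chunk == block: race masked
assert has_partial_chunk(cached, chunk_size=256)     # chunk != block: partial chunk
```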