
[Perf] Add stream priority in gpu context#2728

Merged
YaoJiayi merged 1 commit into dev from localdev/ret-kernel on Mar 13, 2026

Conversation

@YaoJiayi
Collaborator

What this PR does / why we need it:

Special notes for your reviewers:

If applicable:

  • this PR contains user facing changes - docs added
  • this PR contains unit tests

Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances performance by integrating high-priority CUDA streams into the GPU context management. By dedicating a high-priority stream for specific operations, such as memory retrieval, the system can ensure that critical tasks are processed more quickly, potentially reducing latency and improving overall responsiveness for these performance-sensitive workflows.

Highlights

  • High-Priority CUDA Streams: Introduced dedicated high-priority CUDA and CuPy streams within the GPUContext and _GPUContext classes to enable prioritized GPU operations.
  • Prioritized Memory Retrieval: Modified the _retrieve_loop function in the server to utilize the newly added high-priority CUDA stream, ensuring critical memory retrieval operations are executed with elevated priority.
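The highlights above rely on CUDA's stream-priority mechanism, where lower numeric values mean higher priority. A minimal sketch of creating such a stream (not the PR's actual code; the helper name is hypothetical, and PyTorch is assumed):

```python
import torch

def make_high_priority_stream(device: int = 0):
    """Create a CUDA stream at a higher-than-default priority.

    In CUDA, lower numeric priority values mean higher priority, so
    priority=-1 requests elevated scheduling. Returns None on CPU-only
    machines so the sketch degrades gracefully without a GPU.
    """
    if not torch.cuda.is_available():
        return None
    return torch.cuda.Stream(device=device, priority=-1)

stream = make_high_priority_stream()
```

Work queued on such a stream can preempt kernels on default-priority streams, which is what makes it attractive for latency-sensitive paths like retrieval.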


Changelog
  • lmcache/v1/multiprocess/gpu_context.py
    • Added high_priority_cuda_stream_ and high_priority_cupy_stream_ attributes to both GPUContext and _GPUContext classes during initialization.
    • Implemented high_priority_stream and high_priority_cupy_stream property methods for both GPUContext and _GPUContext classes to expose the new high-priority streams.
  • lmcache/v1/multiprocess/server.py
    • Updated the _retrieve_loop function to use gpu_context.high_priority_stream for its CUDA stream context, prioritizing memory retrieval operations.
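The retrieve-path change amounts to running the copy under a stream context. A sketch of that pattern (hypothetical function and variable names, assuming PyTorch; the PR's real `_retrieve_loop` differs):

```python
import torch

def retrieve_chunk(dst: torch.Tensor, src: torch.Tensor, stream=None):
    """Copy one retrieved chunk into a GPU-side buffer.

    When a high-priority stream is supplied, the copy is queued on it so
    it can run ahead of default-priority work; without one (e.g. on a
    CPU-only machine), a plain blocking copy is performed instead.
    """
    n = src.numel()
    if stream is None:
        dst[:n].copy_(src)
        return
    with torch.cuda.stream(stream):
        dst[:n].copy_(src, non_blocking=True)
    stream.synchronize()  # make the result visible before the buffer is reused

# CPU-only usage of the sketch:
buf = torch.zeros(8)
retrieve_chunk(buf, torch.arange(4.0))
```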
Activity
  • No human activity has been recorded on this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request integrates high-priority CUDA streams into the GPU context for performance optimization, particularly for retrieve operations. However, this change introduces a critical race condition: retrieval and storage operations now use different streams while sharing the same temporary GPU buffer, which can lead to data corruption. Additionally, server-side cleanup logic in the retrieval path is not correctly synchronized. Furthermore, there is significant code duplication in lmcache/v1/multiprocess/gpu_context.py regarding the initialization and property definitions for cuda_stream_, high_priority_cuda_stream_, cupy_stream_, and high_priority_cupy_stream_ across GPUCacheContext and PlainGPUCacheContext, which should be refactored for better maintainability.
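The race the review describes arises because two streams touch the same temporary buffer with no ordering between them. One standard remedy (a sketch of the general technique, not the PR's eventual fix; assumes PyTorch) is to record a CUDA event on the producing stream and make the consuming stream wait on it:

```python
import torch

def order_streams(producer, consumer):
    """Make `consumer` wait for all work already queued on `producer`.

    Recording an event on the producer and waiting on it from the
    consumer enforces cross-stream ordering on the device without
    blocking the host thread. No-op when streams are unavailable
    (CPU-only machines, where this sketch passes None).
    """
    if producer is None or consumer is None:
        return
    event = torch.cuda.Event()
    event.record(producer)      # marks the current tail of producer's queue
    consumer.wait_event(event)  # consumer stalls until the event fires
```

With this ordering in place, the retrieve stream cannot read the shared buffer until the store stream's writes have completed.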

Comment thread lmcache/v1/multiprocess/gpu_context.py
Comment thread lmcache/v1/multiprocess/server.py
Comment thread lmcache/v1/multiprocess/gpu_context.py
Comment thread lmcache/v1/multiprocess/gpu_context.py
Comment thread lmcache/v1/multiprocess/gpu_context.py
@YaoJiayi YaoJiayi added the full Run comprehensive tests on this PR label Mar 10, 2026
Contributor

@sammshen sammshen left a comment


LGTM!

@YaoJiayi YaoJiayi merged commit 34df767 into dev Mar 13, 2026
30 of 34 checks passed
@YaoJiayi YaoJiayi deleted the localdev/ret-kernel branch March 19, 2026 17:16
hyunyul-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Mar 20, 2026
add stream priority

Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
add stream priority

Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
add stream priority

Signed-off-by: YaoJiayi <120040070@link.cuhk.edu.cn>

Labels

full Run comprehensive tests on this PR

3 participants