[Core] Add VRAM_SEG support for NIXL OBJ plugin #2640

Merged

DongDongJu merged 2 commits into LMCache:dev from jgoldsch12:vram_for_obj on Mar 19, 2026

Conversation

@jgoldsch12
Contributor

This commit introduces VRAM_SEG support for the NIXL OBJ backend. With the introduction of "accelerated" engines for the NIXL OBJ backend, the VRAM_SEG memory type can now be supported and should no longer be rejected by the LMCache NIXL storage backend. This allows a user to specify a 'nixl_device_buffer' of type 'cuda' in the LMCache configuration; previously, this setting would fail when used with the NIXL OBJ backend.
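As a sketch, the user-facing change might look like the following configuration fragment. Only the 'nixl_device_buffer' key and its 'cuda' value come from the PR description; the surrounding key is an illustrative assumption, not the exact LMCache schema.

```yaml
# Hypothetical LMCache config fragment; only nixl_device_buffer: "cuda"
# is taken from the PR text, the other key is an assumption.
nixl_backend: "OBJ"
nixl_device_buffer: "cuda"   # previously rejected when used with the OBJ backend
```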

Signed-off-by: Jason Goldschmidt jason.goldschmidt@dell.com

@gemini-code-assist
Contributor

Summary of Changes

Hello @jgoldsch12, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces crucial support for VRAM_SEG memory types within the NIXL OBJ storage backend. The primary goal is to allow users to specify 'cuda' as a nixl_device_buffer type in LMCache configurations when using the OBJ backend, which was previously unsupported. This enhancement is achieved by updating the backend validation logic to recognize CUDA devices for OBJ storage and by refining memory registration to handle VRAM allocations more robustly through page-sized segmentation.

Highlights

  • VRAM_SEG Support for NIXL OBJ Backend: Enabled support for the 'cuda' device type (representing VRAM_SEG) for the NIXL OBJ backend, allowing its use in LMCache configurations.
  • Enhanced Memory Registration: Modified the memory registration process within init_mem_handlers to break down large memory buffers into page-sized chunks, ensuring compatibility and preventing buffer size limitations of the underlying plugin.
  • Updated Backend Validation Logic: Adjusted the validate_nixl_backend function to correctly permit the 'cuda' device for the OBJ backend under both dynamic and non-dynamic storage configurations.
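The page-sized registration highlighted above can be sketched as follows. This is a minimal illustration, not the actual LMCache implementation: the page size constant and the (address, size, device) descriptor tuples are assumptions standing in for the real NIXL registration descriptors.

```python
# Sketch of breaking a large buffer into page-sized registration chunks.
# PAGE_SIZE and the descriptor shape are illustrative assumptions.
PAGE_SIZE = 2 * 1024 * 1024  # assumed page size (2 MiB)

def register_in_pages(base_addr: int, total_size: int, device_id: int):
    """Build one (addr, size, device_id) descriptor per page-sized chunk,
    so no single registration exceeds the plugin's buffer size limit."""
    descs = []
    offset = 0
    while offset < total_size:
        chunk = min(PAGE_SIZE, total_size - offset)
        descs.append((base_addr + offset, chunk, device_id))
        offset += chunk
    return descs
```

Chunking at registration time also mirrors how the transfer descriptors are created, which is the alignment the review comment below points out.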


Changelog
  • lmcache/v1/storage_backend/nixl_storage_backend.py
    • Updated validate_nixl_backend to allow 'cuda' as a valid device for the 'OBJ' backend when dynamic_storage is true.
    • Modified validate_nixl_backend to include 'OBJ' in the list of backends that support both 'cpu' and 'cuda' devices when dynamic_storage is false.
    • Refactored init_mem_handlers to register memory in page-sized chunks instead of a single large block, improving compatibility with underlying plugin buffer size limits.
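A minimal sketch of the relaxed device rule described in the changelog, assuming a simple backend-to-devices mapping. The real validate_nixl_backend in nixl_storage_backend.py takes the full configuration (including dynamic_storage) and differs in detail; the non-OBJ entry here is a hypothetical contrast case.

```python
# Illustrative only: models the device rule this PR relaxes for OBJ.
# Backend entries other than "OBJ" are assumptions for contrast.
SUPPORTED_DEVICES = {
    "OBJ": {"cpu", "cuda"},   # "cuda" (VRAM_SEG) newly allowed by this PR
    "POSIX": {"cpu"},         # hypothetical CPU-only backend
}

def validate_nixl_backend(backend: str, device: str) -> bool:
    """Return True if the device type is valid for the given NIXL backend."""
    return device in SUPPORTED_DEVICES.get(backend, set())
```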

Contributor

@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces VRAM_SEG support for the NIXL OBJ backend. This is achieved by updating validate_nixl_backend to permit 'cuda' devices with the 'OBJ' backend for both dynamic and static storage configurations. Additionally, init_mem_handlers is modified to register memory in page-sized chunks, which prevents exceeding the maximum buffer size of the underlying NIXL plugin. This change aligns memory registration with the existing transfer descriptor creation logic.

@jgoldsch12
Contributor Author

@vvenkates27 who could be another approver for this PR?

@vvenkates27

@vvenkates27 who could be another approver for this PR?

@yanok and @sammshen can you have a look?

Signed-off-by: Jason Goldschmidt <jason.goldschmidt@dell.com>
Contributor

@sammshen left a comment

LGTM

@sammshen sammshen requested a review from deng451e March 13, 2026 21:19
Collaborator

@DongDongJu left a comment

LGTM

@DongDongJu DongDongJu enabled auto-merge (squash) March 16, 2026 14:05
@github-actions (Bot) added the 'full' label (Run comprehensive tests on this PR) Mar 16, 2026
@DongDongJu DongDongJu merged commit ae7af06 into LMCache:dev Mar 19, 2026
23 of 26 checks passed
Commits referencing this pull request ("Add VRAM_SEG support for NIXL OBJ plugin", signed off by Jason Goldschmidt <jason.goldschmidt@dell.com>):

  • hyunyul-XCENA pushed to xcena-dev/LMCache, Mar 20, 2026
  • realAaronWu pushed to realAaronWu/LMCache, Mar 20, 2026 (also signed off by Aaron Wu <aaron.wu@dell.com>)
  • deng451e pushed to deng451e/LMCache, Mar 21, 2026
  • deng451e pushed to deng451e/LMCache, Mar 25, 2026
  • deng451e pushed to deng451e/LMCache, Mar 27, 2026
  • jooho-XCENA pushed to xcena-dev/LMCache, Apr 2, 2026
  • jooho-XCENA pushed to xcena-dev/LMCache, Apr 2, 2026

Labels

full Run comprehensive tests on this PR


4 participants