
[MP][Bugfix] fix free error when memory_objs is empty#2768

Merged
ApostaC merged 2 commits into LMCache:dev from chunxiaozheng:mp-memory-free-bugfix
Mar 16, 2026

Conversation


@chunxiaozheng chunxiaozheng commented Mar 13, 2026

Fix this error, which occurs when lazy_memory_allocator is not used in MP mode.
(screenshot of the error traceback)

In MP mode with MixedMemoryAllocator, triggering a write to L2 first calls reserve_read on the L1 objects and then finish_read. finish_read calls self._memory_manager.free(need_to_free), but none of the mem_objs are temporary and need_to_allocate is empty, which triggers this bug.

Signed-off-by: idellzheng <idellzheng@tencent.com>
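The guard added by this PR can be sketched as follows. This is a minimal illustrative stand-in, not LMCache's real class: the name MixedMemoryAllocatorSketch and the method body below are hypothetical, only the batched_free / memory_objs names come from the PR.

```python
class MixedMemoryAllocatorSketch:
    """Illustrative stand-in for the allocator fixed in this PR (not the real class)."""

    def __init__(self):
        self.freed = []

    def batched_free(self, memory_objs):
        # Guard of the kind this PR adds: an empty list is a no-op.
        # Without it, code below that assumes at least one element
        # (e.g. inspecting memory_objs[0]) would raise IndexError.
        if not memory_objs:
            return
        _kind = type(memory_objs[0])  # hypothetical dispatch on the first object
        self.freed.extend(memory_objs)


alloc = MixedMemoryAllocatorSketch()
alloc.batched_free([])      # previously the crashing case; now a no-op
alloc.batched_free([1, 2])  # the normal path still frees objects
print(alloc.freed)          # → [1, 2]
```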
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug where the batched_free function would encounter an error when invoked with an empty list of memory objects. This scenario was particularly problematic in multiprocessing environments not utilizing the lazy_memory_allocator. The change introduces robust guard clauses to ensure the function gracefully handles empty input, thereby enhancing the stability and reliability of memory management operations.

Highlights

  • Bugfix for batched_free function: Implemented checks to prevent errors when the batched_free function is called with an empty list of memory objects, specifically addressing issues in multiprocessing (MP) mode when lazy_memory_allocator is not in use.


Changelog
  • lmcache/v1/memory_management.py
    • Added a check at the beginning of the batched_free function (three occurrences) to immediately return if the memory_objs list is empty, preventing potential errors.
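The path the PR description walks through (reserve_read, then finish_read, then free on an empty list) can be sketched as below. The names reserve_read, finish_read, and need_to_free come from the description; the manager class and dict-based objects are hypothetical stand-ins, not LMCache's real API.

```python
class FakeMemoryManager:
    """Hypothetical manager used only to illustrate the empty-list guard."""

    def __init__(self):
        self.free_calls = []

    def free(self, memory_objs):
        # Same shape of guard the PR adds to batched_free: empty input
        # is a no-op instead of an error.
        if not memory_objs:
            return
        self.free_calls.append(list(memory_objs))


def finish_read(manager, mem_objs):
    # Only temporary objects are freed; in the reported case none of the
    # L1 objects are temporary, so need_to_free ends up empty.
    need_to_free = [obj for obj in mem_objs if obj["temp"]]
    manager.free(need_to_free)


mgr = FakeMemoryManager()
finish_read(mgr, [{"temp": False}, {"temp": False}])  # previously crashed
print(mgr.free_calls)  # → []
```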
Activity
  • No human activity has been recorded on this pull request yet.


@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request addresses a potential IndexError in MixedMemoryAllocator.batched_free that occurs when memory_objs is an empty list. The fix involves adding a guard clause at the beginning of the method to return early if the list is empty. This same protective check has been consistently applied to the batched_free methods in TensorMemoryAllocator and PagedTensorMemoryAllocator as well, improving robustness even though they were not susceptible to the same crash. The changes are correct and effectively resolve the bug.


@maobaolong maobaolong left a comment


LGTM. This is an obvious bug, thanks for this fix.


@ApostaC ApostaC left a comment


LGTM!

@ApostaC ApostaC enabled auto-merge (squash) March 15, 2026 01:57
@github-actions github-actions Bot added the "full" label (Run comprehensive tests on this PR) Mar 15, 2026
@ApostaC ApostaC merged commit 223b579 into LMCache:dev Mar 16, 2026
33 of 37 checks passed
hyunyul-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Mar 20, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
deng451e pushed a commit to deng451e/LMCache that referenced this pull request Mar 25, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
deng451e pushed a commit to deng451e/LMCache that referenced this pull request Mar 27, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>

Labels

full Run comprehensive tests on this PR



3 participants