[MP][optimize] optimize evict in lru policy #2740

Merged: chunxiaozheng merged 3 commits into LMCache:dev from chunxiaozheng:mp-evict-optimize on Mar 12, 2026
Conversation

@chunxiaozheng (Collaborator) commented on Mar 11, 2026

For one request, the later keys should be evicted first. For example, if a request has (key1, key2, key3) and we evict key1 first, then key2 and key3 will never be hit again, because prefix matching requires key1 to be present.
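The effect can be sketched with a toy LRU policy. This is purely illustrative: the class name, `on_keys_created`, and `evict_one` are stand-ins, not the actual LMCache API, and the real policy's internals may differ.

```python
from collections import OrderedDict

class ToyLRUPolicy:
    """Illustrative LRU policy; not the actual LMCache implementation."""

    def __init__(self):
        # LRU at the front of the dict, MRU at the back.
        self._order = OrderedDict()

    def on_keys_created(self, keys):
        # Process later keys first so that earlier (prefix) keys end up
        # most-recently-used and therefore survive eviction longer.
        for key in reversed(keys):
            self._order[key] = True
            self._order.move_to_end(key)

    def evict_one(self):
        # Pop from the LRU end of the dict.
        key, _ = self._order.popitem(last=False)
        return key

policy = ToyLRUPolicy()
policy.on_keys_created(["key1", "key2", "key3"])
print(policy.evict_one())  # key3: the end of the chain goes first
```

With the reversed insertion order, eviction pressure removes key3, then key2, and only then key1, so partial prefix hits remain possible as the cache shrinks.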

My test result is as follows:
(screenshot: Clipboard_Screenshot_1773227384)

Signed-off-by: idellzheng <idellzheng@tencent.com>
@maobaolong (Collaborator) left a comment

@chunxiaozheng Thanks for this fix, LGTM.
@ApostaC @sammshen Would you like to take a look at this PR? Thanks!

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces a strategic optimization to the LRU eviction policy by altering the order in which keys are processed during creation and access. The change specifically targets scenarios where keys might have prefix dependencies, ensuring that the eviction logic prioritizes keys in a way that maximizes the likelihood of subsequent cache hits for related data. This refinement is expected to lead to more efficient cache utilization and improved performance.

Highlights

  • LRU Eviction Optimization: The processing order for keys in the on_keys_created and on_keys_touched methods of the LRU eviction policy has been reversed. This change ensures that for a given request, later keys are considered for eviction before earlier ones, which is crucial for scenarios involving prefix-matched keys.
  • Improved Cache Hit Rate: By prioritizing the eviction of later keys, the system aims to prevent premature eviction of prefix keys that might invalidate subsequent related keys, thereby improving the overall cache hit rate.
  • Code Clarity: New comments have been added to explain the rationale behind reversing the key processing order, providing better context for future maintainers.
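The hit-rate point in the highlights can be illustrated with a plain `OrderedDict` (again, a sketch, not LMCache code): compare which key a simple LRU would evict first under forward versus reversed batch insertion.

```python
from collections import OrderedDict

def first_evicted(keys, reverse):
    """Return the key a simple LRU would evict first after a batch insert.

    Illustrative helper, not part of the LMCache API.
    """
    order = OrderedDict()
    for k in (reversed(keys) if reverse else keys):
        order[k] = True
    # The front of the dict is the least-recently-used victim.
    return next(iter(order))

chain = ["key1", "key2", "key3"]
print(first_evicted(chain, reverse=False))  # key1: losing the prefix
                                            # also invalidates key2/key3
print(first_evicted(chain, reverse=True))   # key3: prefix keys survive
```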


Changelog
  • lmcache/v1/distributed/eviction_policy/lru.py
    • Reversed the iteration order for keys in the on_keys_created method to process later keys first.
    • Reversed the iteration order for keys in the on_keys_touched method to process later keys first.
    • Added explanatory comments detailing the reasoning for the reversed key processing order in both methods.

@gemini-code-assist (Bot) left a comment

Code Review

This pull request optimizes the LRU eviction policy for a specific prefix-caching use case by reversing the order of keys processed in a batch. While the intention is good, this change breaks existing unit tests that rely on the previous batch processing order. I've pointed this out as a high-severity issue. Additionally, I've made a couple of medium-severity suggestions to improve the clarity and reduce duplication in the comments that were added to explain the change.

# NOTE: for the request, the later keys should be evicted first.
# For example, the request has (key1, key2, key3), if we first
# evict key1, due to prefix match, key2 and key3 will not be hit.
for key in reversed(keys):
Severity: high

This change, while addressing the prefix caching scenario, seems to break existing unit tests. For example, TestLRUEvictionPolicyOrdering.test_touch_moves_key_to_most_recent and test_create_existing_key_moves_to_most_recent in tests/v1/distributed/test_lru_eviction_policy.py will likely fail.

These tests create keys in a batch, e.g., policy.on_keys_created([make_key(1), make_key(2), make_key(3)]), and then assert on the LRU order.

  • Before your change: The order of keys in the _order dict would be 1, 2, 3 (from LRU to MRU).
  • After your change: The order becomes 3, 2, 1.

This change in ordering breaks the assertions in those tests. Please update the tests to reflect the new intended behavior and ensure no existing functionality is regressed.
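A hypothetical updated assertion, using a minimal stand-in policy rather than the real test fixtures in tests/v1/distributed/test_lru_eviction_policy.py, would look like:

```python
from collections import OrderedDict

class StubLRUPolicy:
    """Stand-in for the policy under test; illustrative only."""

    def __init__(self):
        self._order = OrderedDict()

    def on_keys_created(self, keys):
        # Mirrors the change under review: iterate the batch in reverse.
        for key in reversed(keys):
            self._order[key] = True
            self._order.move_to_end(key)

def test_batch_create_order_is_reversed():
    policy = StubLRUPolicy()
    policy.on_keys_created(["k1", "k2", "k3"])
    # LRU-to-MRU order is now k3, k2, k1: later keys are evicted first.
    assert list(policy._order) == ["k3", "k2", "k1"]

test_batch_create_order_is_reversed()
```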

Comment on lines +80 to +82
# NOTE: for the request, the later keys should be evicted first.
# For example, the request has (key1, key2, key3), if we first
# evict key1, due to prefix match, key2 and key3 will not be hit.
Severity: medium

This comment is helpful for context, but it's duplicated in on_keys_touched and the explanation could be clearer. To improve maintainability and clarity, consider replacing this comment and the one in on_keys_touched with a more descriptive one.

Suggested alternative:

# When processing a batch of keys from a single request (e.g., for prefix caching),
# keys are often ordered from shortest to longest prefix. To maximize cache hits,
# we should evict longer prefixes before shorter ones. By reversing the list,
# we mark shorter prefixes (which appear earlier in the original list) as more
# recently used, making them less likely to be evicted.

Comment on lines +100 to +101
# NOTE: for the request, the later keys should be evicted first.
# The example is the same as `on_keys_created`.
Severity: medium

This comment is duplicated from on_keys_created. To improve maintainability, please use a consistent and clear comment in both places, as suggested in the feedback for on_keys_created.

@ApostaC (Contributor) left a comment

LGTM!

@chunxiaozheng chunxiaozheng enabled auto-merge (squash) March 12, 2026 02:01
@github-actions github-actions Bot added the full Run comprehensive tests on this PR label Mar 12, 2026
@chunxiaozheng chunxiaozheng merged commit bdf4c9a into LMCache:dev Mar 12, 2026
26 of 28 checks passed
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
Signed-off-by: idellzheng <idellzheng@tencent.com>

Labels

full Run comprehensive tests on this PR


3 participants