
[MP] Protocol with Single Key #2584

Merged
ApostaC merged 6 commits into LMCache:dev from Oasis-Git:unbatch
Feb 27, 2026

Conversation

@Oasis-Git
Member

What this PR does / why we need it:
Change the protocol from a list of IPC keys to a single IPC key.
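
A minimal sketch of the payload shape before and after, using a placeholder KeyType (the real type and the actual protocol definitions live in lmcache/v1/multiprocess/protocols/engine.py; the field names here are illustrative, not the verbatim source):

    from dataclasses import dataclass

    KeyType = str  # placeholder for LMCache's real key type


    @dataclass
    class StoreRequest:
        # Before this PR: keys: list[KeyType], answered with list[bool].
        # After this PR: exactly one key per request, answered with one bool.
        key: KeyType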

Special notes for your reviewers:

If applicable:

  • this PR contains user-facing changes (docs added)
  • this PR contains unit tests

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello @Oasis-Git, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the IPC mechanism for KV cache operations within the LMCache system. The primary goal is to simplify the protocol by transitioning from handling lists of IPC keys to processing individual keys for store and retrieve requests. This change streamlines the internal data structures, type definitions, and request submission logic, making the system more robust and easier to maintain. The modifications ensure that both individual and batched operations now consistently utilize a single-key paradigm.

Highlights

  • Simplified IPC Protocol: The Inter-Process Communication (IPC) protocol for KV cache STORE and RETRIEVE operations has been simplified to use a single key instead of a list of keys, streamlining request handling.
  • Type Definition Updates: Type definitions for RetrieveResult and internal future tracking (store_futures, retrieve_futures) were updated to reflect the new single-key approach, improving clarity and consistency.
  • Refactored Batch Processing: Batch processing for store and retrieve requests was refactored to delegate to individual single-key submission methods, reducing redundancy and simplifying logic.
  • Core Server Logic Alignment: Core server-side logic, including key resolution (resolve_key) and cache store/retrieve methods, was modified to consistently process individual keys, aligning with the updated protocol.
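
As a rough before/after of the affected annotations in vllm_multi_process_adapter.py, reconstructed from the changelog below (MessagingFuture stands in for the adapter's real future type; these are not the verbatim declarations):

    from typing import Any

    MessagingFuture = Any  # placeholder for LMCache's messaging future type

    # A retrieve now answers for exactly one key:
    RetrieveResult = bool  # was: list[bool]

    # Futures are tracked per request id, without a companion list of ids:
    store_futures: dict[str, MessagingFuture] = {}     # was: dict[str, tuple[MessagingFuture, list[str]]]
    retrieve_futures: dict[str, MessagingFuture] = {}  # same simplification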


Changelog
  • lmcache/integration/vllm/vllm_multi_process_adapter.py
    • Changed RetrieveResult type definition from list[bool] to bool.
    • Updated store_futures and retrieve_futures dictionary value types from tuple[MessagingFuture, list[str]] to MessagingFuture.
    • Modified submit_store_request and submit_retrieve_request to send a single key.
    • Refactored batched_submit_store_requests and batched_submit_retrieve_requests to iterate and call their respective single-key submission methods.
    • Simplified get_finished logic by removing handling for other_reqs and all(r_result).
  • lmcache/v1/distributed/api.py
    • Removed an assertion related to chunk_hash in ipc_keys_to_object_keys.
  • lmcache/v1/multiprocess/protocols/engine.py
    • Updated STORE and RETRIEVE protocol definitions to expect a single KeyType in the payload and return a single bool as a response.
  • lmcache/v1/multiprocess/server.py
    • Renamed resolve_keys function to resolve_key and modified it to accept and process a single IPCCacheEngineKey.
    • Updated store method signature and implementation to accept a single IPCCacheEngineKey.
    • Updated retrieve method signature and implementation to accept a single IPCCacheEngineKey and return a single bool.
    • Adjusted error handling in retrieve to return a single bool.
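
As a sketch of the client-side refactor described above, the batched entry points now reduce to a loop over the single-key methods (method names follow the changelog; signatures and bodies are illustrative rather than the verbatim source, and the review comments below suggest strict=True for these zips):

    def batched_submit_store_requests(self, request_ids, ops, event):
        # One STORE request per (request_id, op) pair; request_ids and
        # ops are expected to have the same length.
        for request_id, op in zip(request_ids, ops, strict=False):
            self.submit_store_request(request_id, op, event)

    def batched_submit_retrieve_requests(self, request_ids, ops, event):
        # Same delegation pattern for RETRIEVE.
        for request_id, op in zip(request_ids, ops, strict=False):
            self.submit_retrieve_request(request_id, op, event)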
Activity
  • No explicit activity (comments, reviews, etc.) has been recorded for this pull request yet.

@Oasis-Git added the full (Run comprehensive tests on this PR) label on Feb 10, 2026
@gemini-code-assist (Bot) left a comment


Code Review

This pull request refactors the multiprocess communication protocol to use a single IPC key for STORE and RETRIEVE operations, instead of a list of keys. This simplifies the protocol definition and the server-side implementation. The client-side adapter (vllm_multi_process_adapter.py) has been updated accordingly, notably by removing the client-side batching logic in batched_submit_store_requests and batched_submit_retrieve_requests. While this change simplifies the code, it may have performance implications. I've also suggested a minor improvement for robustness in the zip calls.

Comment on lines 382 to +383
         for request_id, op in zip(request_ids, ops, strict=False):
-            all_keys.append(
-                self._create_key(op.token_ids, op.start, op.end, request_id=request_id)
-            )
-            block_ids.extend(op.block_ids)
-        future = send_lmcache_request(
-            self.mq_client,
-            RequestType.STORE,
-            [
-                all_keys,
-                self.instance_id,
-                block_ids,
-                event.ipc_handle(),
-            ],
-        ).to_cuda_future()
-        self.store_futures[request_ids[0]] = (future, list(request_ids[1:]))
+            self.submit_store_request(request_id, op, event)

Severity: medium

This change replaces a single batched request with multiple individual requests sent in a loop. This is likely to increase overhead from network communication and serialization, potentially impacting performance. While this aligns with the single-key protocol change, the performance trade-off should be considered. A similar change is present in batched_submit_retrieve_requests.
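
As a toy model of the messaging cost this comment describes (purely illustrative, not LMCache code):

    def ipc_messages_per_batch(num_keys: int, single_key_protocol: bool) -> int:
        # Old protocol: one message carries the whole batch of keys.
        # New protocol: one message per key, so the message count scales
        # with the batch size instead of being amortized.
        return num_keys if single_key_protocol else 1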

"""
all_keys: list[IPCCacheEngineKey] = []
block_ids: list[int] = []
for request_id, op in zip(request_ids, ops, strict=False):

Severity: medium

The docstring states that ops and request_ids should have the same length. Using zip(..., strict=True) would enforce this and raise a ValueError if the lengths differ, which is safer than silently truncating to the shorter list.

Suggested change
-        for request_id, op in zip(request_ids, ops, strict=False):
+        for request_id, op in zip(request_ids, ops, strict=True):

"""
all_keys: list[IPCCacheEngineKey] = []
block_ids: list[int] = []
for request_id, op in zip(request_ids, ops, strict=False):

Severity: medium

Similar to batched_submit_store_requests, the docstring states that ops and request_ids should have the same length. Using zip(..., strict=True) would be safer here as well to prevent silently ignoring mismatched inputs.

Suggested change
-        for request_id, op in zip(request_ids, ops, strict=False):
+        for request_id, op in zip(request_ids, ops, strict=True):

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
@ApostaC left a comment


LGTM in general. Please fix the tests

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
@KuntaiDu left a comment


LGTM

@ApostaC left a comment


LGTM!

@ApostaC merged commit 418f58f into LMCache:dev on Feb 27, 2026
24 checks passed
sammshen pushed a commit to sammshen/LMCache that referenced this pull request Mar 1, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
hlin99 pushed a commit to hlin99/LMCache that referenced this pull request Mar 2, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
oferki pushed a commit to oferki/LMCache that referenced this pull request Mar 3, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Ofer Kiselov Nahman <ofer.kiselovnahman@weka.io>
oferki pushed a commit to oferki/LMCache that referenced this pull request Mar 3, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
mauryaavinash95 pushed a commit to mauryaavinash95/LMCache that referenced this pull request Mar 7, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
shaoxiawjc pushed a commit to shaoxiawjc/LMCache that referenced this pull request Mar 11, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: shaoxiawjc <wjc2800@163.com>
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>

Labels

full: Run comprehensive tests on this PR

3 participants