
Fault Tolerance Check #2692

Merged
KuntaiDu merged 21 commits into LMCache:dev from Oasis-Git:fault-t
Mar 12, 2026

Conversation

@Oasis-Git
Member

What this PR does / why we need it:

Special notes for your reviewers:

If applicable:

  • this PR contains user-facing changes - docs added
  • this PR contains unit tests

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the fault tolerance of the LMCache vLLM multi-process adapter by integrating a robust health-checking system. It introduces a dedicated heartbeat thread to continuously monitor the LMCache server's health. When the server becomes unresponsive, the adapter transitions into a degraded mode, preventing new operations and managing existing ones to ensure system stability. This change improves the overall resilience and reliability of the LMCache integration by proactively addressing potential server failures.

Highlights

  • Fault Tolerance Implementation: Introduced a comprehensive fault tolerance mechanism for the LMCache vLLM multi-process adapter, allowing it to detect and gracefully handle an unhealthy LMCache server.
  • Heartbeat Thread: Implemented a new HeartbeatThread that periodically sends HEALTH_CHECK requests to the LMCache server, monitoring its operational status.
  • Degraded Mode: The adapter now enters a 'degraded mode' if the LMCache server is detected as unhealthy, preventing new requests and managing pending ones to avoid system failures.
  • Timeout Handling: Added explicit TimeoutError handling for various LMCache message queue operations, such as get_lmcache_chunk_size, register_kv_caches, and unregister_kv_cache, improving robustness.
  • Health Check Protocol: Extended the LMCache messaging protocol with a new HEALTH_CHECK request type and a corresponding server-side handler that performs a memory check.
  • Error Tracking for Retrieves: Added a mechanism to track and report block IDs that failed during retrieve operations due to timeouts or an unhealthy server state.


Changelog
  • lmcache/integration/vllm/vllm_multi_process_adapter.py
    • Imported threading and PeriodicThread for concurrent operations.
    • Defined LMCACHE_MQ_TIMEOUT and LMCACHE_HEARTBEAT_INTERVAL for configurable timeouts and intervals.
    • Added send_health_check function to query server health.
    • Implemented HeartbeatThread to periodically check server health and manage an internal health event.
    • Modified get_lmcache_chunk_size to include a timeout and raise ConnectionError on failure.
    • Initialized a threading.Event for health status and started the HeartbeatThread in adapter constructors.
    • Added an is_healthy property to both LMCacheVLLMAdapter and LMCacheVLLMWorkerAdapter.
    • Integrated health checks into maybe_submit_lookup_request, check_lookup_result, free_lookup_locks, end_session, submit_store_request, and submit_retrieve_request to prevent operations when unhealthy.
    • Updated retrieve_futures to store block IDs alongside the future for better error tracking.
    • Introduced error_block_ids to accumulate block IDs that failed during retrieve operations.
    • Added timeout handling to register_kv_caches and shutdown methods.
    • Implemented get_block_ids_with_load_errors to retrieve and clear error block IDs.
  • lmcache/v1/multiprocess/protocols/base.py
    • Added HEALTH_CHECK as a new request type to the RequestType enum.
  • lmcache/v1/multiprocess/protocols/controller.py
    • Included HEALTH_CHECK in the REQUEST_NAMES list.
    • Defined the protocol for HEALTH_CHECK, specifying no payload and a boolean response.
  • lmcache/v1/multiprocess/server.py
    • Implemented a health_check method that returns the result of storage_manager.memcheck().
    • Registered the HEALTH_CHECK request type with its corresponding health_check handler.
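The protocol extension described in the changelog (a new request type whose server-side handler delegates to `storage_manager.memcheck()`) can be illustrated roughly like this. The enum members other than `HEALTH_CHECK`, the `Server` class shape, and the dispatch dict are assumptions made for the sketch, not the repository's actual definitions.

```python
from enum import Enum, auto


class RequestType(Enum):
    # Illustrative subset of existing request types
    GET_CHUNK_SIZE = auto()
    STORE = auto()
    RETRIEVE = auto()
    # New in this PR
    HEALTH_CHECK = auto()


class Server:
    """Sketch of server-side dispatch, assuming a storage manager that
    exposes a boolean memcheck() as the changelog describes."""

    def __init__(self, storage_manager):
        self.storage_manager = storage_manager
        self.handlers = {
            RequestType.HEALTH_CHECK: self.health_check,
        }

    def health_check(self) -> bool:
        # Healthy iff the storage manager's memory check passes.
        return self.storage_manager.memcheck()

    def handle(self, request_type):
        return self.handlers[request_type]()
```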
Activity
  • The pull request description is empty, providing no initial context from the author.
  • No comments or reviews have been made on this pull request yet.

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request introduces fault tolerance to the vLLM multi-process adapter by adding a heartbeat thread for server health checks. When the server is unhealthy, the adapters enter a degraded mode, failing fast on operations. Timeouts have also been added to several message queue operations to prevent indefinite blocking. The changes are well-implemented and significantly improve robustness. I have a couple of suggestions to improve documentation and code maintainability.

"""Periodically checks server health via HEALTH_CHECK.

Manages a threading.Event that adapters use to gate operations.
Once unhealthy, the adapter degrades permanently (no recovery).
Copy link
Copy Markdown
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The docstring states that the adapter degrades permanently, but the implementation allows for recovery. The _execute method will call self._health_event.set() if the health check passes again, allowing the adapter to resume normal operation. Please update the docstring to reflect this recovery capability.

Suggested change
-Once unhealthy, the adapter degrades permanently (no recovery).
+Once unhealthy, the adapter enters a degraded mode, and can recover once the server is healthy again.

Comment on lines +184 to +206
        try:
            self.chunk_size = get_lmcache_chunk_size(self.mq_client)
        except TimeoutError:
            self.mq_client.close()
            raise ConnectionError(
                f"LMCache server did not respond within {LMCACHE_MQ_TIMEOUT}s. "
                "Is the server running?"
            ) from None
        assert self.chunk_size % vllm_block_size == 0, (
            "LMCache chunk size should be a multiple of vLLM block size"
        )
        self.blocks_in_chunk = self.chunk_size // vllm_block_size

        # Health state (shared with heartbeat thread)
        self._health_event = threading.Event()
        self._health_event.set()

        # Start heartbeat thread
        self._heartbeat = HeartbeatThread(
            mq_client=self.mq_client,
            health_event=self._health_event,
        )
        self._heartbeat.start()
Contributor

medium

The initialization logic for fault tolerance, including the try...except block for get_lmcache_chunk_size and the setup of the HeartbeatThread, is duplicated in LMCacheMPSchedulerAdapter.__init__ (here) and LMCacheMPWorkerAdapter.__init__ (lines 419-441). To improve maintainability and reduce redundancy, consider refactoring this common logic into a shared base class or a helper function.
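The refactoring the reviewer suggests could look roughly like the following; the helper name `init_fault_tolerance` and its signature are hypothetical, not part of the PR.

```python
import threading

LMCACHE_MQ_TIMEOUT = 30.0


def init_fault_tolerance(mq_client, get_chunk_size_fn, heartbeat_factory):
    """Hypothetical helper consolidating the init logic duplicated across
    both adapter constructors: query the chunk size with a timeout, then
    set up the health event and start the heartbeat."""
    try:
        chunk_size = get_chunk_size_fn(mq_client)
    except TimeoutError:
        mq_client.close()
        raise ConnectionError(
            f"LMCache server did not respond within {LMCACHE_MQ_TIMEOUT}s. "
            "Is the server running?"
        ) from None

    health_event = threading.Event()
    health_event.set()  # start in the healthy state

    heartbeat = heartbeat_factory(mq_client=mq_client, health_event=health_event)
    heartbeat.start()
    return chunk_size, health_event, heartbeat
```

Each adapter `__init__` would then reduce to one call plus attribute assignments, so a future change to the handshake only needs to be made once.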

Signed-off-by: yuweia <ayw.sirius19@gmail.com>
@Oasis-Git added the "full" label (Run comprehensive tests on this PR) on Mar 5, 2026
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Contributor

@ApostaC ApostaC left a comment

Some changes are needed. See the details below

Comment threads: lmcache/integration/vllm/vllm_multi_process_adapter.py (8 threads, 3 outdated); lmcache/v1/multiprocess/protocols/controller.py (1 thread, outdated)
)

-for request_id, r_future in self.retrieve_futures.items():
+for request_id, (r_future, r_block_ids) in self.retrieve_futures.items():
Contributor

nit:

Suggested change
-for request_id, (r_future, r_block_ids) in self.retrieve_futures.items():
+for request_id, (r_future, _) in self.retrieve_futures.items():

@ApostaC
Contributor

ApostaC commented Mar 9, 2026

cc @maobaolong for visibility.

Contributor

@ApostaC ApostaC left a comment

Something more

Comment thread lmcache/integration/vllm/vllm_multi_process_adapter.py Outdated
Comment thread lmcache/integration/vllm/vllm_multi_process_adapter.py
"HEALTH_CHECK": ProtocolDefinition(
    payload_classes=[],
    response_class=bool,
    handler_type=HandlerType.SYNC,
Contributor

We can use BLOCKING as the handler type

Comment thread lmcache/v1/multiprocess/server.py Outdated
Comment thread lmcache/v1/multiprocess/server.py Outdated
…branch)

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
@ApostaC ApostaC mentioned this pull request Mar 10, 2026
2 tasks

# Timeout (seconds) for blocking MQ requests: initial chunk-size query,
# KV cache registration/unregistration, and other synchronous operations.
DEFAULT_MQ_TIMEOUT: float = 30.0
Contributor

Let's use 300 here

"""
future = send_lmcache_request(mq_client, RequestType.GET_CHUNK_SIZE, [])
-chunk_size = future.result()
+chunk_size = future.result(timeout=DEFAULT_MQ_TIMEOUT)
Contributor

We are hard-coding the timeout here. Does this mean that people cannot overwrite this from vLLM's config?
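One common way to address this concern (an assumption for illustration, not necessarily what the PR ultimately does) is to let an environment variable override the default and to centralize the timeout-to-ConnectionError translation in one place; the variable name `LMCACHE_MQ_TIMEOUT` and helper `result_with_timeout` here are hypothetical.

```python
import concurrent.futures
import os

# Hypothetical: operators override the MQ timeout via an environment
# variable, defaulting to the reviewer-suggested 300 seconds.
DEFAULT_MQ_TIMEOUT: float = float(os.environ.get("LMCACHE_MQ_TIMEOUT", "300.0"))


def result_with_timeout(future, timeout=None):
    """Resolve a future, translating a timeout into ConnectionError so
    callers fail fast instead of blocking indefinitely."""
    effective = timeout if timeout is not None else DEFAULT_MQ_TIMEOUT
    try:
        return future.result(timeout=effective)
    except concurrent.futures.TimeoutError:
        raise ConnectionError(
            f"LMCache server did not respond within {effective}s"
        ) from None
```

Threading the value through vLLM's KV-connector config instead of an env var would be the more idiomatic option if the config plumbing exists.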

Contributor

@ApostaC ApostaC left a comment

LGTM!

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Contributor

@KuntaiDu KuntaiDu left a comment

LGTM!

@KuntaiDu KuntaiDu enabled auto-merge (squash) March 11, 2026 22:21
@KuntaiDu KuntaiDu merged commit d41ceaf into LMCache:dev Mar 12, 2026
33 of 38 checks passed
@Oasis-Git Oasis-Git deleted the fault-t branch March 12, 2026 21:45
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 20, 2026
* health check
* lint
* fix ut
* fix
* dev
* timeout
* fix ut
* add comment
* add test
* Remove fault tolerance CI step (will be added in separate fault-t-ci branch)
* Rename HEALTH_CHECK to PING, add timeout params, extract helper
* fix
* fix ut
* fix timeout

---------

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: yuweia <ayw.sirius19@gmail.com>
Signed-off-by: Aaron Wu <aaron.wu@dell.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
* health check
* lint
* fix ut
* fix
* dev
* timeout
* fix ut
* add comment
* add test
* Remove fault tolerance CI step (will be added in separate fault-t-ci branch)
* Rename HEALTH_CHECK to PING, add timeout params, extract helper
* fix
* fix ut
* fix timeout

---------

Signed-off-by: Oasis-Git <ayw.sirius19@gmail.com>
Signed-off-by: yuweia <ayw.sirius19@gmail.com>

Labels

full Run comprehensive tests on this PR


3 participants