
[MP][Feat] support worker-affinity in the MQ thread pool #2842

Merged
ApostaC merged 5 commits into LMCache:dev from ApostaC:local-dev/mp-thread-affinity on Mar 24, 2026

Conversation

@ApostaC
Contributor

ApostaC commented on Mar 21, 2026

What this PR does / why we need it:

Introduces thread affinity routing for GPU-bound request handlers in the multiprocess message queue server. All STORE/RETRIEVE requests from the same vLLM instance (identified by zmq identity) are always dispatched to the same worker thread. This eliminates the need for gpu_context.transfer_lock since same-instance GPU transfers are now inherently serialized.

Also cleans up pool naming:

  • Affinity pool — for GPU-bound handlers (STORE/RETRIEVE). Routes by hash(zmq_identity) % N so same-client work is serialized on one thread (sketched after this list).
  • Normal pool — for non-GPU blocking handlers (LOOKUP, END_SESSION, etc.). Standard ThreadPoolExecutor.
  • The old default thread pool is removed; all blocking handlers must be explicitly assigned to a pool.
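
A minimal sketch of the affinity-routing idea, assuming a queue-per-worker design (class and method names here are illustrative; the real implementation lives in lmcache/v1/multiprocess/affinity_pool.py and may differ):

    import queue
    import threading


    class AffinityThreadPool:
        """Route every task that shares an affinity key to the same worker thread."""

        def __init__(self, num_workers: int):
            if num_workers <= 0:
                raise ValueError("num_workers must be a positive integer")
            self._queues = [queue.SimpleQueue() for _ in range(num_workers)]
            for q in self._queues:
                threading.Thread(target=self._worker, args=(q,), daemon=True).start()

        def submit(self, affinity_key: bytes, fn, *args, **kwargs) -> None:
            # Same key -> same queue -> same thread, so all tasks for one
            # zmq identity (one vLLM instance) execute in submission order.
            idx = hash(affinity_key) % len(self._queues)
            self._queues[idx].put((fn, args, kwargs))

        @staticmethod
        def _worker(q: "queue.SimpleQueue") -> None:
            while True:
                fn, args, kwargs = q.get()
                fn(*args, **kwargs)

Because each key maps to exactly one thread, two STORE/RETRIEVE requests from the same instance can never run concurrently, which is why gpu_context.transfer_lock becomes redundant.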

New CLI args (backward compatible; see the resolution sketch after this list):

  • --max-workers sets both pools (existing behavior preserved)
  • --max-gpu-workers overrides the affinity pool size
  • --max-cpu-workers overrides the normal pool size
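
The resolution precedence, restated as a runnable sketch (the function name is hypothetical; the actual parsing lives in the server's argument handling):

    def resolve_pool_sizes(max_workers, max_gpu_workers=None, max_cpu_workers=None):
        # --max-workers is the base; the per-pool flags override it when given.
        gpu = max_gpu_workers if max_gpu_workers is not None else max_workers
        cpu = max_cpu_workers if max_cpu_workers is not None else max_workers
        return gpu, cpu

    assert resolve_pool_sizes(8) == (8, 8)                      # old behavior preserved
    assert resolve_pool_sizes(8, max_gpu_workers=4) == (4, 8)   # GPU pool overridden

With neither per-pool flag given, both pools fall back to --max-workers, which is what keeps the change backward compatible.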

Special notes for your reviewers:

  • blend_server.py is deprecated but updated minimally to avoid breakage
  • MPCacheEngine.lock (used only in clear()) is unrelated and left as-is

If applicable:

  • this PR contains user-facing changes - docs added
  • this PR contains unit tests

Signed-off-by: ApostaC <yihua98@uchicago.edu>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the multiprocess message queue server's concurrency model by introducing specialized thread pools. The primary goal is to improve performance and prevent deadlocks in GPU-bound operations by ensuring that requests from a single vLLM instance are processed sequentially on a dedicated worker thread. This change enhances the server's ability to manage diverse workloads more efficiently and provides finer-grained control over resource allocation through new configuration options.

Highlights

  • Worker Affinity Thread Pool: Introduced a new AffinityThreadPool that routes tasks with the same affinity_key to the same worker thread, ensuring sequential execution for GPU-bound operations (STORE/RETRIEVE) from the same vLLM instance. This eliminates the need for per-instance GPU transfer locks.
  • Thread Pool Refactoring: The message queue server now explicitly uses two types of thread pools: an 'Affinity pool' for GPU-bound handlers and a 'Normal pool' for non-GPU blocking handlers. The old default thread pool has been removed, requiring all blocking handlers to be explicitly assigned to a pool (a dispatch sketch follows this list).
  • CLI Argument Enhancements: Added new command-line arguments --max-gpu-workers and --max-cpu-workers to allow separate configuration of worker counts for the affinity and normal thread pools, respectively. The existing --max-workers now serves as a base default for both if not explicitly overridden.
  • GPU Transfer Lock Removal: The gpu_context.transfer_lock mechanism, previously used to serialize GPU-CPU data transfers, has been removed as its functionality is now inherently handled by the worker affinity routing.
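
A rough picture of the dispatch this implies (handler tables and the dispatch signature are hypothetical, shown only to illustrate the explicit pool assignment):

    GPU_BOUND = {"STORE", "RETRIEVE"}      # -> affinity pool
    CPU_BOUND = {"LOOKUP", "END_SESSION"}  # -> normal pool

    def dispatch(msg_type, zmq_identity, handler, payload, affinity_pool, normal_pool):
        """Route one request to the pool its handler is assigned to."""
        if msg_type in GPU_BOUND:
            # The zmq identity is the affinity key: one vLLM instance, one
            # thread, so same-instance GPU transfers serialize without a lock.
            affinity_pool.submit(zmq_identity, handler, payload)
        elif msg_type in CPU_BOUND:
            normal_pool.submit(handler, payload)
        else:
            # No default pool anymore: an unassigned blocking handler is a bug.
            raise RuntimeError(f"handler {msg_type} is not assigned to a pool")

The else branch reflects the removal of the default pool: a blocking handler without an explicit pool assignment now fails loudly instead of silently landing in a shared pool.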

Contributor

gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a significant architectural improvement by implementing an affinity-based thread pool for GPU-bound operations. This elegantly removes the need for a transfer_lock, which should improve concurrency and performance. The changes are well-structured, separating the new AffinityThreadPool into its own module and cleanly refactoring the MessageQueueServer to use distinct pools for GPU and CPU tasks. The removal of the default thread pool in favor of explicit assignment is a strong design choice that enhances clarity and robustness. The accompanying documentation and test updates are thorough. My feedback includes suggestions to add validation for worker counts to improve robustness against invalid user input.

Comment thread lmcache/v1/multiprocess/affinity_pool.py
Comment on lines +148 to +150
base = args.max_workers
max_gpu = args.max_gpu_workers if args.max_gpu_workers is not None else base
max_cpu = args.max_cpu_workers if args.max_cpu_workers is not None else base
Contributor


Severity: medium

The argument parsing logic allows for non-positive values for worker counts (e.g., --max-workers 0), which will cause a ValueError when creating the thread pools. It would be more user-friendly to validate these values here and provide a specific error message.

    base = args.max_workers
    max_gpu = args.max_gpu_workers if args.max_gpu_workers is not None else base
    max_cpu = args.max_cpu_workers if args.max_cpu_workers is not None else base

    if base <= 0:
        raise ValueError(f"--max-workers must be a positive integer, but got {base}")
    if max_gpu <= 0:
        raise ValueError(f"Resolved GPU worker count must be positive, but got {max_gpu}")
    if max_cpu <= 0:
        raise ValueError(f"Resolved CPU worker count must be positive, but got {max_cpu}")
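
A complementary option, not part of this PR, would be to reject bad values at parse time with an argparse type validator (a sketch):

    import argparse

    def positive_int(text: str) -> int:
        value = int(text)
        if value <= 0:
            raise argparse.ArgumentTypeError(f"expected a positive integer, got {text!r}")
        return value

    parser = argparse.ArgumentParser()
    parser.add_argument("--max-workers", type=positive_int, default=1)
    parser.add_argument("--max-gpu-workers", type=positive_int, default=None)
    parser.add_argument("--max-cpu-workers", type=positive_int, default=None)

argparse then reports a usage error before any pool is constructed, rather than a ValueError surfacing at pool-construction time.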

ApostaC added the mp label (Buildkite trigger for multi-processing mode test) on Mar 23, 2026
Contributor

sammshen left a comment


LGTM!

ApostaC added the full label (Run comprehensive tests on this PR) on Mar 24, 2026
deng451e self-requested a review on Mar 24, 2026, 00:43
Signed-off-by: ApostaC <yihua98@uchicago.edu>
ApostaC merged commit 3137ce0 into LMCache:dev on Mar 24, 2026
25 checks passed
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 26, 2026
* [add] thread affinity executor

* Add initialization and docs

* deprecate old blend server
deng451e pushed a commit to deng451e/LMCache that referenced this pull request Mar 27, 2026
* [add] thread affinity executor

* Add initialization and docs

* deprecate old blend server
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
* [add] thread affinity executor

* Add initialization and docs

* deprecate old blend server
