[MP] Print inference request id to help identify which vllm request the current log belongs to #2812

Merged
maobaolong merged 3 commits into LMCache:dev from maobaolong:reqId_mp
Mar 25, 2026
Conversation

@maobaolong
Collaborator

What this PR does / why we need it:

Special notes for your reviewers:

If applicable:

  • this PR contains user facing changes - docs added
  • this PR contains unit tests

@chatgpt-codex-connector

Codex usage limits have been reached for code reviews. Please check with the admins of this repo to increase the limits by adding credits.
Credits must be used to enable repository-wide code reviews.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the traceability of prefetch operations within the system. By introducing an "external_request_id", it enables linking internal prefetch tasks to their originating external requests, which is crucial for debugging and understanding the flow of data in a distributed environment. The changes also enhance clarity by renaming an internal ID and enriching log messages with more relevant identifiers.

Highlights

  • Enhanced Request Tracing: Introduced an "external_request_id" field to the PrefetchHandle dataclass and the submit_prefetch_task method, allowing for end-to-end tracing of prefetch requests back to their original caller.
  • Clarity in Prefetch IDs: Renamed the "request_id" field within PrefetchHandle to "prefetch_request_id" to explicitly distinguish it as an internal L2 prefetch identifier.
  • Improved Logging: Updated debug and info logs for prefetch requests to include both the new "external_request_id" and "prefetch_request_id", providing more comprehensive context for debugging and monitoring.
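The highlights above can be sketched in code. This is a minimal illustration, not the actual LMCache implementation: the names PrefetchHandle, submit_prefetch_task, prefetch_request_id, and external_request_id come from the summary, but the field types, the id format, the key parameter, and the logger name are assumptions made here for the example.

```python
import logging
from dataclasses import dataclass
from typing import Optional

logger = logging.getLogger("lmcache.storage")


@dataclass
class PrefetchHandle:
    # Internal L2 prefetch identifier (renamed from `request_id` by this PR).
    prefetch_request_id: str
    # Originating vLLM inference request id, enabling end-to-end tracing.
    external_request_id: Optional[str] = None


def submit_prefetch_task(key: str, external_request_id: Optional[str] = None) -> PrefetchHandle:
    # Hypothetical id generation; the real code derives the id differently.
    handle = PrefetchHandle(
        prefetch_request_id=f"prefetch-{abs(hash(key)) % 10000}",
        external_request_id=external_request_id,
    )
    # Logs now carry both ids, so a log line can be traced back to its vLLM request.
    logger.debug(
        "Submitted prefetch %s for external request %s",
        handle.prefetch_request_id,
        handle.external_request_id,
    )
    return handle
```

Carrying both ids on the handle is what lets a debug line in a multi-process log be attributed to a specific vLLM request.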
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

Contributor

@gemini-code-assist (Bot) left a comment


Code Review

This pull request effectively introduces an external_request_id to PrefetchHandle and submit_prefetch_task, enabling end-to-end tracing of prefetch requests. The renaming of request_id to prefetch_request_id within the PrefetchHandle dataclass and its usage throughout the StorageManager clarifies the distinct purposes of these identifiers. The logging statements have been updated to include both IDs, which will significantly aid in debugging and observability. The changes are consistently applied and well-integrated across the affected components.
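The practical payoff of logging both ids is that mixed, multi-request log output can be filtered down to one vLLM request. The sketch below is hypothetical: the log-line format and the helper function are inventions for illustration, not LMCache's actual output.

```python
# Hypothetical log lines in the post-PR format; the real format may differ.
log_lines = [
    "DEBUG Submitted prefetch prefetch-0421 for external request cmpl-abc123",
    "DEBUG Submitted prefetch prefetch-0877 for external request cmpl-def456",
    "DEBUG Prefetch prefetch-0421 completed for external request cmpl-abc123",
]


def logs_for_request(lines, external_request_id):
    # Keep only the log lines that belong to one vLLM request,
    # which is exactly what the added external_request_id enables.
    return [line for line in lines if external_request_id in line]


for line in logs_for_request(log_lines, "cmpl-abc123"):
    print(line)
```

Without the external id in each line, the two prefetch ids alone could not be grouped back to their originating request.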

maobaolong added a commit to maobaolong/LMCache that referenced this pull request Mar 23, 2026
…request the current log belongs to

Signed-off-by: baoloongmao <baoloongmao@tencent.com>
LMCache#2812
chunxiaozheng pushed a commit to maobaolong/LMCache that referenced this pull request Mar 23, 2026
…request the current log belongs to (#4)

LMCache#2812

Signed-off-by: baoloongmao <baoloongmao@tencent.com>
Contributor

@ApostaC left a comment


LGTM!

…he current log belongs to.

Signed-off-by: baoloongmao <baoloongmao@tencent.com>
Signed-off-by: baoloongmao <baoloongmao@tencent.com>
Signed-off-by: baoloongmao <baoloongmao@tencent.com>
Contributor

@sammshen left a comment


LGTM!

@maobaolong enabled auto-merge (squash) March 25, 2026 03:05
@github-actions (Bot) added the "full: Run comprehensive tests on this PR" label Mar 25, 2026
@maobaolong merged commit 2c4ac6b into LMCache:dev Mar 25, 2026
26 checks passed
realAaronWu pushed a commit to realAaronWu/LMCache that referenced this pull request Mar 26, 2026
…he current log belongs to (LMCache#2812)

* [MP] Print inference request id to help identify which vllm request the current log belongs to.

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

---------

Signed-off-by: baoloongmao <baoloongmao@tencent.com>
deng451e pushed a commit to deng451e/LMCache that referenced this pull request Mar 27, 2026
…he current log belongs to (LMCache#2812)

* [MP] Print inference request id to help identify which vllm request the current log belongs to.

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

---------

Signed-off-by: baoloongmao <baoloongmao@tencent.com>
jooho-XCENA pushed a commit to xcena-dev/LMCache that referenced this pull request Apr 2, 2026
…he current log belongs to (LMCache#2812)

* [MP] Print inference request id to help identify which vllm request the current log belongs to.

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

* Fix

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

---------

Signed-off-by: baoloongmao <baoloongmao@tencent.com>

Labels

full: Run comprehensive tests on this PR


3 participants