
[UX] Log full stack trace upon exception in MP mode #2653

Open

KuntaiDu wants to merge 3 commits into LMCache:dev from KuntaiDu:kuntai-improve-trace

Conversation

@KuntaiDu (Contributor)

What this PR does / why we need it:

Currently LMCache MP mode will not log full stack trace. This PR fixes that.

Special notes for your reviewers:

If applicable:

  • this PR contains user-facing changes - docs added
  • this PR contains unit tests

Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the debugging capabilities of LMCache's multiprocessing mode by ensuring that all exceptions are logged with their complete stack traces. This change provides more comprehensive error information, making it easier to diagnose and resolve issues that arise during runtime in the distributed environment.

Highlights

  • Enhanced Error Logging: Replaced standard logger.error and logger.warning calls with logger.exception across multiple multiprocessing components to ensure that full stack traces are captured whenever an exception occurs.
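For illustration, a minimal sketch of the pattern this PR applies (the handler and logger names here are hypothetical, not the actual call sites, which are listed in the changelog below):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def process(request):
    # Hypothetical worker step that fails.
    raise NotImplementedError(f"cannot process {request!r}")

def handle_request(request):
    try:
        process(request)
    except Exception:
        # Before this PR: logger.error(...) logged only the message.
        # logger.exception logs at ERROR level and automatically
        # appends the active exception's full traceback.
        logger.exception("Failed to handle request %r", request)

handle_request("demo")
```

Running this prints the ERROR message followed by the full traceback of the injected NotImplementedError, which is exactly the information the old logger.error calls were dropping.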


Changelog
  • lmcache/v1/multiprocess/blend_server.py
    • Updated error logging for exceptions during retrieval of prefetched results to include full stack traces.
  • lmcache/v1/multiprocess/mq.py
    • Modified error logging in the blocking handler to capture full stack traces.
    • Enhanced error logging in the main loop for request handling exceptions to include full stack traces.
  • lmcache/v1/multiprocess/server.py
    • Changed warning logging to exception logging for errors encountered during key retrieval, ensuring stack traces are recorded.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (Bot) left a comment


Code Review

This pull request correctly replaces logger.error and logger.warning with logger.exception to ensure full stack traces are logged when exceptions occur in multiprocess mode. This is a valuable improvement for debugging. My review includes suggestions to make the logging calls more idiomatic by removing the redundant exception object from the log message arguments, as logger.exception automatically appends exception information and the traceback.
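A minimal sketch of the redundant pattern versus the idiomatic one (the message and exception here are hypothetical, not the actual diff):

```python
import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

try:
    raise ValueError("bad key")
except ValueError as e:
    # Redundant: the exception text shows up twice, once formatted
    # into the message and once in the traceback that
    # logger.exception appends on its own.
    logger.exception("Failed to retrieve key: %s", e)
    # Idiomatic: logger.exception supplies the exception info itself.
    logger.exception("Failed to retrieve key")
```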

Review threads:
  • lmcache/v1/multiprocess/blend_server.py (outdated)
  • lmcache/v1/multiprocess/mq.py (outdated)
  • lmcache/v1/multiprocess/mq.py
  • lmcache/v1/multiprocess/server.py
@ApostaC (Contributor) left a comment


The fix looks good to me.
We may also want to use the traceback module's TracebackException to print the stack trace when an exception happens (https://docs.python.org/3/library/traceback.html#traceback.TracebackException).
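For reference, a minimal sketch of that approach; `TracebackException.from_exception` captures the exception for later rendering, and `format()` yields the same lines the interpreter would print:

```python
import traceback

try:
    1 / 0
except ZeroDivisionError as exc:
    # Capture the exception without holding live frame references,
    # then render it whenever convenient.
    te = traceback.TracebackException.from_exception(exc)
    print("".join(te.format()))
```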

@royyhuang (Contributor) left a comment


LGTM!

Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
@KuntaiDu (Contributor, Author)

@ApostaC We do not need to put the exception inside logger.exception. Also, I did a quick test by manually injecting a NotImplementedError, and the stack trace looks good.

[Screenshot: stack trace logged for the injected NotImplementedError]

Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
sammshen added a commit to sammshen/LMCache that referenced this pull request on Apr 21, 2026:
The diagnostic dump built in LMCache#2653 proved the transformers install is
correct and that an isolated `from transformers import GenerationConfig`
works fine inside the failing pod. The failure only manifests through
vllm's CLI entry point.

The root cause is in vllm/entrypoints/cli/main.py itself: the module spawns
a daemon thread (`_bg_preload_torch`) that calls `import torch` and
then `import transformers` at module scope, racing the main thread,
which proceeds into main() -> vllm.entrypoints.cli.benchmark.main ->
... -> vllm.transformers_utils.config:18 ->
`from transformers import GenerationConfig, PretrainedConfig`.

On the K3s pods the race lands deterministically in a state where
transformers' _LazyModule._class_to_module cannot resolve
'GenerationConfig' (even though, as the diagnostic confirms, the
fully-initialized module contains it). A fresh local venv with
identical versions cannot reproduce it, consistent with a
timing-sensitive race. The diagnostic ran `import transformers` on
the main thread as its first action, which is exactly why it didn't
trip the race.

Fix: after `uv pip install -U vllm ...`, patch
vllm/entrypoints/cli/main.py to add `import transformers` at module
top, before the BG thread is spawned. Once transformers is already
in sys.modules with _LazyModule fully initialized, the BG thread's
`import transformers` becomes a no-op and the later
`from transformers import ...` on the main thread is just an
attribute lookup against a fully-ready module.

The patch is idempotent (marker comment prevents double-application)
and fails loudly if vllm restructures the file. Once upstream vllm
fixes this on their side, this patch block can be removed.

Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
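For illustration, a hedged sketch of what such an idempotent patch step could look like (the marker text, the `patch_vllm_cli_main` helper, and the `import torch` anchor are all hypothetical; a real patch would pin to whatever line vllm's file actually contains):

```python
from pathlib import Path

# Hypothetical marker; any unique comment works for idempotence.
MARKER = "# patched: preload transformers before the BG thread"

def patch_vllm_cli_main(path: Path) -> None:
    """Insert `import transformers` at module top, exactly once."""
    src = path.read_text()
    if MARKER in src:
        return  # already applied; the marker prevents double-application
    # Hypothetical anchor; pin it to a line the real file contains.
    anchor = "import torch"
    if anchor not in src:
        # Fail loudly if vllm restructures the file.
        raise RuntimeError(f"cannot patch {path}: anchor not found")
    src = src.replace(anchor, f"{MARKER}\nimport transformers\n{anchor}", 1)
    path.write_text(src)
```

Once transformers is fully initialized in sys.modules before the daemon thread starts, the thread's `import transformers` is a no-op and the race described above cannot trigger.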
@github-actions (Bot)

This pull request has been automatically marked as stale because it has not had activity within 60 days. It will be automatically closed if no further activity occurs within 30 days.

@github-actions (Bot) added the stale label on Apr 29, 2026.