[UX] Log full stack trace upon exception in MP mode #2653
KuntaiDu wants to merge 3 commits into LMCache:dev
Conversation
Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
Summary of Changes (Gemini Code Assist): This pull request improves the debugging capabilities of LMCache's multiprocessing mode by ensuring that all exceptions are logged with their complete stack traces. This change provides more comprehensive error information, making it easier to diagnose and resolve issues that arise at runtime in the distributed environment.
Code Review
This pull request correctly replaces logger.error and logger.warning with logger.exception to ensure full stack traces are logged when exceptions occur in multiprocess mode. This is a valuable improvement for debugging. My review includes suggestions to make the logging calls more idiomatic by removing the redundant exception object from the log message arguments, as logger.exception automatically appends exception information and the traceback.
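To make the suggestion concrete, here is a minimal runnable sketch of the distinction; the `process` function, logger name, and message text are hypothetical, not taken from the diff:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def process(request):
    # Hypothetical stand-in for an MP-mode handler that fails.
    raise ValueError(f"bad request: {request!r}")

try:
    process(None)
except Exception as e:
    # Redundant: the exception text is duplicated in the message, and
    # exc_info=True is needed to get the traceback at all.
    logger.error("Failed to process request: %s", e, exc_info=True)
    # Idiomatic: logger.exception logs at ERROR level and appends the
    # active exception and its traceback automatically.
    logger.exception("Failed to process request")
```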
ApostaC left a comment:
The fix looks good to me.
We may also want to use the traceback module's TracebackException to print the stack trace when an exception happens (https://docs.python.org/3/library/traceback.html#traceback.TracebackException).
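For reference, a minimal sketch of `TracebackException` used that way; the `format_exc_for_ipc` helper name and the cross-process framing are illustrative assumptions, not code from this PR:

```python
import traceback

def format_exc_for_ipc(exc: BaseException) -> str:
    # TracebackException snapshots the exception for rendering without
    # keeping frame references alive, so the formatted trace can be
    # shipped across a process boundary as a plain string.
    te = traceback.TracebackException.from_exception(exc)
    return "".join(te.format())

try:
    1 / 0
except ZeroDivisionError as exc:
    print(format_exc_for_ipc(exc))
```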
Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
@ApostaC We do not need to put the exception inside the log message: `logger.exception` already appends the exception and its traceback automatically.
Signed-off-by: KuntaiDu <kuntai@uchicago.edu>
Build LMCache#2653's diagnostic dump proved the transformers install is correct and that an isolated `from transformers import GenerationConfig` works fine inside the failing pod. The failure only manifests through vllm's CLI entry point.

Root cause is in vllm/entrypoints/cli/main.py itself: the module spawns a daemon thread (`_bg_preload_torch`) that calls `import torch` and then `import transformers` at module scope, racing the main thread, which proceeds into main() -> vllm.entrypoints.cli.benchmark.main -> ... -> vllm.transformers_utils.config:18 -> `from transformers import GenerationConfig, PretrainedConfig`. On the K3s pods the race lands deterministically in a state where transformers' _LazyModule._class_to_module cannot resolve 'GenerationConfig' (even though, as the diagnostic confirms, the fully-initialized module contains it). A fresh local venv with identical versions cannot reproduce it, which is consistent with a timing-sensitive race. The diagnostic ran `import transformers` on the main thread as its first action, which is exactly why it didn't trip the race.

Fix: after `uv pip install -U vllm ...`, patch vllm/entrypoints/cli/main.py to add `import transformers` at module top, before the BG thread is spawned. Once transformers is already in sys.modules with _LazyModule fully initialized, the BG thread's `import transformers` becomes a no-op, and the later `from transformers import ...` on the main thread is just an attribute lookup against a fully-ready module. The patch is idempotent (a marker comment prevents double application) and fails loudly if vllm restructures the file. Once upstream vllm fixes this on their side, this patch block can be removed.

Signed-off-by: Samuel Shen <slshen@tensormesh.ai>
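A sketch of what such an idempotent patch step could look like. Apart from the `_bg_preload_torch` anchor, the marker-comment guard, and the fail-loudly behavior described above, everything here (marker text, placement logic, error messages) is an illustrative assumption, not the actual patch from the build:

```python
import importlib.util
from pathlib import Path

# Hypothetical marker comment; any string unique to the patch works as an
# idempotency guard.
MARKER = "# lmcache-ci: preload transformers before the BG thread"
PATCH = MARKER + "\nimport transformers  # noqa: F401\n"

def patch_vllm_cli_main() -> None:
    # find_spec locates the file without executing the module (importing
    # it would itself spawn the racy background preload thread).
    spec = importlib.util.find_spec("vllm.entrypoints.cli.main")
    if spec is None or spec.origin is None:
        raise RuntimeError("vllm.entrypoints.cli.main not found; update the patch")
    path = Path(spec.origin)
    src = path.read_text()
    if MARKER in src:
        return  # already patched; safe to re-run on every build
    if "_bg_preload_torch" not in src:
        # Fail loudly if vllm restructured the file rather than silently
        # shipping an unpatched, racy entry point.
        raise RuntimeError(f"_bg_preload_torch not found in {path}; update the patch")
    # Prepending is the simplest placement for a sketch; a real patch would
    # need to insert after any module docstring and __future__ imports.
    path.write_text(PATCH + src)

if __name__ == "__main__":
    patch_vllm_cli_main()
```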
This pull request has been automatically marked as stale because it has not had activity within 60 days. It will be automatically closed if no further activity occurs within 30 days.

What this PR does / why we need it:
Currently, LMCache's MP mode does not log the full stack trace when an exception occurs. This PR fixes that.
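A minimal before/after sketch of the change in question; the logger name, message text, and the raised error are illustrative, not the actual call sites from the diff:

```python
import logging

logger = logging.getLogger("lmcache.mp")  # illustrative logger name

try:
    raise RuntimeError("worker crashed")  # stand-in for an MP-mode failure
except RuntimeError as e:
    logger.error("Error in MP handler: %s", e)  # before: message only, no traceback
    logger.exception("Error in MP handler")     # after: message plus full stack trace
```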