Feature/trtllm mha workspace size configurable #15089 (#15131)
Conversation
Summary of Changes
Hello @baonudesifeizhai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request introduces a significant improvement to memory management within the TRTLLM Multi-Head Attention (MHA) backend by making its workspace size dynamically configurable. Previously fixed at 512 MB, the workspace can now be adjusted by users through an environment variable, directly addressing potential Out-Of-Memory issues when processing larger batch sizes. This enhancement provides greater flexibility and stability for users running memory-intensive inference tasks.
Code Review
This pull request makes the TRTLLM MHA workspace size configurable through an environment variable, which is a great improvement for users running large batch sizes. The overall approach of reading the configuration before the parent class initialization is correct. My feedback focuses on simplifying the implementation of the new helper function to improve code clarity and maintainability by removing redundant logic and unused variables.
/tag-and-rerun-ci again
Is that failed test related?
/rerun-failed-ci |
Still failed... why?
Merge commit: merged branch 'main' of https://github.com/sgl-project/sglang (74 commits, including this PR, sgl-project#15089 / sgl-project#15131) into the feature branch.

Motivation
Fixes #15089
Make the TRTLLM MHA workspace size configurable via the `SGLANG_FLASHINFER_WORKSPACE_SIZE` environment variable instead of hardcoding it at 512 MB. This lets users raise the workspace size for large batch sizes and avoid OOM errors.
Modifications
- Added a `_get_trtllm_mha_workspace_size()` helper function that handles the workspace size configuration (sketched below)
- Modified `TRTLLMHAAttnBackend.__init__()` to use the configurable workspace size
- Preserves the user's environment variable setting even if it gets overridden by `FlashInferAttnBackend.__init__()`
- The default remains 512 MB for backward compatibility
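A minimal sketch of what such a helper could look like, assuming only the names given in this description (`_get_trtllm_mha_workspace_size`, `SGLANG_FLASHINFER_WORKSPACE_SIZE`, and the 512 MB default); the actual implementation in the PR may differ:

```python
import os

# Previous hardcoded value, kept as the default: 512 MB.
_DEFAULT_WORKSPACE_SIZE = 512 * 1024 * 1024


def _get_trtllm_mha_workspace_size() -> int:
    """Return the TRTLLM MHA workspace size in bytes.

    Uses SGLANG_FLASHINFER_WORKSPACE_SIZE when it is set to a positive
    integer; otherwise falls back to the 512 MB default.
    """
    raw = os.environ.get("SGLANG_FLASHINFER_WORKSPACE_SIZE")
    if raw is None:
        return _DEFAULT_WORKSPACE_SIZE
    try:
        size = int(raw)
    except ValueError:
        # Hypothetical fallback: ignore malformed values rather than crash.
        return _DEFAULT_WORKSPACE_SIZE
    return size if size > 0 else _DEFAULT_WORKSPACE_SIZE
```

Since `FlashInferAttnBackend.__init__()` can override the setting, the backend would read the value before calling the parent initializer and restore the user's environment variable afterwards; the sketch above only covers the read path.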
Accuracy Tests
Benchmarking and Profiling
Running with a 2 GiB workspace:

```bash
export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((2048 * 1024 * 1024))
python -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-7B-Instruct \
    --attention-backend trtllm_mha \
    --port 30000
```
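(The variable takes a byte count here: `$((2048 * 1024 * 1024))` evaluates to 2147483648 bytes, i.e. 2 GiB.)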
The following runs had extra logging added to verify the configured workspace size. Default (variable unset, 512 MB):

```bash
unset SGLANG_FLASHINFER_WORKSPACE_SIZE
python -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-7B-Instruct \
    --attention-backend trtllm_mha \
    --port 30000
```
1 GiB workspace:

```bash
export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((1024 * 1024 * 1024))
python -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-7B-Instruct \
    --attention-backend trtllm_mha \
    --port 30000
```
2 GiB workspace:

```bash
export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((2048 * 1024 * 1024))
python -m sglang.launch_server \
    --model-path Qwen/Qwen2.5-7B-Instruct \
    --attention-backend trtllm_mha \
    --port 30000
```
Checklist