
Feature/trtllm mha workspace size configurable #15089 (#15131)

Merged

Fridge003 merged 12 commits into sgl-project:main from baonudesifeizhai:feature/trtllm-mha-workspace-size-configurable on Dec 18, 2025

Conversation

@baonudesifeizhai (Contributor) commented on Dec 14, 2025:

Motivation

Fixes #15089.
Make the TRTLLM MHA workspace size configurable via the SGLANG_FLASHINFER_WORKSPACE_SIZE environment variable instead of hardcoding it at 512 MB. This lets users raise the workspace size for large batch sizes and avoid OOM errors.

Modifications

  • Added a _get_trtllm_mha_workspace_size() helper function to handle the workspace size configuration (see the sketch after this list).
  • Modified TRTLLMHAAttnBackend.__init__() to use the configurable workspace size.
  • Preserved the user's environment variable setting even if FlashInferAttnBackend.__init__() would otherwise override it.
  • Kept the default at 512 MB for backward compatibility.
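A minimal sketch of what the helper might look like (the function and environment variable names come from this PR; the body below is an assumption, not the merged code):

import os

# Previous hardcoded size, kept as the default for backward compatibility.
_DEFAULT_WORKSPACE_SIZE = 512 * 1024 * 1024  # 512 MB

def _get_trtllm_mha_workspace_size() -> int:
    # The environment variable is interpreted as a byte count; the benchmark
    # runs below pass e.g. $((2048 * 1024 * 1024)) for a 2 GiB workspace.
    return int(
        os.environ.get("SGLANG_FLASHINFER_WORKSPACE_SIZE", _DEFAULT_WORKSPACE_SIZE)
    )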

Accuracy Tests

Benchmarking and Profiling

Running with a 2 GiB workspace (with extra log output added to verify the configured size):

export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((2048 * 1024 * 1024))
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --attention-backend trtllm_mha \
  --port 30000

[screenshot: server logs]

With the variable unset (512 MB default):

unset SGLANG_FLASHINFER_WORKSPACE_SIZE
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --attention-backend trtllm_mha \
  --port 30000

[screenshot: server logs]

With a 1 GiB workspace:

export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((1024 * 1024 * 1024))
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --attention-backend trtllm_mha \
  --port 30000

[screenshot: server logs]

With a 2 GiB workspace:

export SGLANG_FLASHINFER_WORKSPACE_SIZE=$((2048 * 1024 * 1024))
python -m sglang.launch_server \
  --model-path Qwen/Qwen2.5-7B-Instruct \
  --attention-backend trtllm_mha \
  --port 30000

[screenshot: server logs]

Checklist

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @baonudesifeizhai, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant improvement to memory management within the TRTLLM Multi-Head Attention (MHA) backend by making its workspace size dynamically configurable. Previously fixed at 512 MB, the workspace can now be adjusted by users through an environment variable, directly addressing potential Out-Of-Memory issues when processing larger batch sizes. This enhancement provides greater flexibility and stability for users running memory-intensive inference tasks.

Highlights

  • Configurable Workspace Size: The TRTLLM MHA workspace size is now configurable via the SGLANG_FLASHINFER_WORKSPACE_SIZE environment variable, moving away from a hardcoded 512 MB limit.
  • OOM Prevention: This change allows users to adjust the workspace size, which is crucial for handling large batch sizes and preventing Out-Of-Memory (OOM) errors.
  • Environment Variable Preservation: A new helper function ensures that the user's SGLANG_FLASHINFER_WORKSPACE_SIZE setting is preserved, even if it would otherwise be overridden by FlashInferAttnBackend.__init__() (see the sketch below).
  • Backward Compatibility: The default workspace size remains 512 MB, ensuring backward compatibility for existing setups.
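The preservation step could look roughly like this (TRTLLMHAAttnBackend, FlashInferAttnBackend, and the variable name come from this PR; the constructor flow and import path shown here are assumptions for illustration):

import os

# Import path is an assumption for this sketch:
from sglang.srt.layers.attention.flashinfer_backend import FlashInferAttnBackend

class TRTLLMHAAttnBackend(FlashInferAttnBackend):
    def __init__(self, *args, **kwargs):
        # Capture the user's setting before the parent initializer runs,
        # since it may rewrite this variable.
        user_value = os.environ.get("SGLANG_FLASHINFER_WORKSPACE_SIZE")
        super().__init__(*args, **kwargs)
        # Restore the user's choice if the parent initializer replaced it.
        if user_value is not None:
            os.environ["SGLANG_FLASHINFER_WORKSPACE_SIZE"] = user_value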

@gemini-code-assist (Bot) left a review comment:

Code Review

This pull request makes the TRTLLM MHA workspace size configurable through an environment variable, which is a great improvement for users running large batch sizes. The overall approach of reading the configuration before the parent class initialization is correct. My feedback focuses on simplifying the implementation of the new helper function to improve code clarity and maintainability by removing redundant logic and unused variables.

(Two outdated review comment threads on python/sglang/srt/layers/attention/trtllm_mha_backend.py.)
@b8zhong (Collaborator) left a comment:

You can just do an inline check instead of a separate function, as sketched below.
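For reference, an inline version might look like this (a sketch of the suggestion, not the merged diff):

import os

workspace_size = int(
    os.environ.get("SGLANG_FLASHINFER_WORKSPACE_SIZE", 512 * 1024 * 1024)
)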

@baonudesifeizhai (Contributor, Author) commented:

[screenshot] It also works after changing back to an inline check.

@b8zhong (Collaborator) commented on Dec 15, 2025:

/tag-and-rerun-ci again

b8zhong enabled auto-merge (squash) on December 15, 2025, 04:04.
@baonudesifeizhai (Contributor, Author) commented:

Is the failed test related?

@zhaochenyang20 (Collaborator) commented:

/rerun-failed-ci

@baonudesifeizhai (Contributor, Author) commented:

Still failing... why?

Fridge003 disabled auto-merge on December 18, 2025, 01:07.
Fridge003 merged commit 891ee82 into sgl-project:main on Dec 18, 2025.
142 of 150 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request on Dec 19, 2025:
…n3_pp

* 'main' of https://github.com/sgl-project/sglang: (74 commits)
  [bug fix][pp] fix inconsistent latency between tp (sgl-project#15379)
  Fix warp illegal instruction in kimi k2 thinking PCG (sgl-project#15306)
  Fix gpt-oss yarn with `truncate` argument (sgl-project#14270)
  Monkey patch deepseek-ocr's `v_head_dim` (sgl-project#15384)
  [model-gateway] Replace PolicyRegistry RwLock with DashMap for lock-free policy lookups (sgl-project#15361)
  [PP] Fix dynamic chunking strategy for PP (sgl-project#15372)
  Fix issue: ENABLE_BELOW_SM90 cannot be enabled on aarch64 CPU (sgl-project#12967)
  Split test_piecewise_cuda_graph.py to optimize CI resource usage (sgl-project#15290)
  unified management of environment variables for vlm cuda ipc transport  (sgl-project#14501)
  Mistral Large 3 NVFP4 TRTLLM MoE support (sgl-project#15049)
  fix: adjust time for test_epd_disaggregation.py (sgl-project#15354)
  Add doc for qwen3 next (sgl-project#15337)
  feat: DeepSeek-V3.2 Streaming tool call output (sgl-project#15278)
  Feature/trtllm mha workspace size configurable sgl-project#15089 (sgl-project#15131)
  [VLM] Support cos sin cache for Qwen3-VL & GLM-4.1V (sgl-project#15205)
  [Deepseek V3.2] Support Overlap Spec + NSA (sgl-project#15307)
  Add request-level timestamp for when prefill finishes (sgl-project#14860)
  [CI] Migrate LoRA tests to test/registered/lora/ (sgl-project#15176)
  Reserve more memory for DeepSeekOCR model and adjust server start timeout for DeepGEMM to reduce flakiness (sgl-project#15277)
  Fix condition check for require_gathered_buffer (sgl-project#15328)
  ...

Development

This pull request closes: [Feature] Make TRTLLM MHA workspace size configurable instead of hardcoded (#15089)

4 participants