
Add KV4-capable backend flashmla and update server args #14989

Merged
Fridge003 merged 1 commit into sgl-project:main from bytedance-iaas:horenc/kv4_backend_compat_on_main_sm90_release on Dec 12, 2025

Conversation

@JackChuang
Contributor

Motivation

#14467 was only tested on SM100, so flashmla and fa3 went untested there. This PR adds flashmla as an attention backend that can run with an FP4 KV cache on SM90.

  • Added the new KV4-capable backend flashmla and documented it in attention_backend.md
  • Updated server_args.py accordingly (a sketch of the kind of change follows this list)
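
As a rough illustration of the server_args.py side: the change presumably adds "flashmla" to the list of attention MLA backends allowed with an FP4 KV cache. The list name, its other entries, and the validation helper below are assumptions for illustration, not the actual diff:

```python
# Hypothetical sketch of the server_args.py change; only the addition of
# "flashmla" reflects this PR, everything else is assumed for illustration.
KV4_COMPATIBLE_MLA_BACKENDS = ["flashmla", "trtllm_mla"]  # "flashmla" newly added

def check_kv4_backend(attention_backend: str, kv_cache_dtype: str) -> None:
    """Reject an FP4 KV cache when the chosen backend cannot consume it."""
    if kv_cache_dtype == "fp4" and attention_backend not in KV4_COMPATIBLE_MLA_BACKENDS:
        raise ValueError(
            f"kv_cache_dtype={kv_cache_dtype!r} requires one of "
            f"{KV4_COMPATIBLE_MLA_BACKENDS}, got {attention_backend!r}"
        )
```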

Experiments

Tested flashmla and fa3 on H20 with DeepSeek-R1-W4AFP8 (MLA) and Qwen3-235B-A22B (MHA).
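
For reference, a minimal offline-engine invocation of this configuration might look like the sketch below. The attention_backend and kv_cache_dtype keyword names mirror sglang's ServerArgs fields, but the "fp4" dtype string is an assumption based on the PR description rather than a verified value:

```python
# Minimal sketch, not a verified command: requires a capable GPU (SM90 here)
# and a local checkpoint; the "fp4" dtype string is assumed from the PR text.
import sglang as sgl

llm = sgl.Engine(
    model_path="deepseek-ai/DeepSeek-R1",  # stand-in for the W4AFP8 checkpoint
    attention_backend="flashmla",          # the new KV4-capable backend on SM90
    kv_cache_dtype="fp4",                  # assumed FP4 KV-cache dtype string
)
print(llm.generate("The capital of France is", {"max_new_tokens": 16}))
llm.shutdown()
```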

Checklist

- Added the new KV4-capable backend flashmla and documented it in attention_backend.md
- Updated server_args.py accordingly

Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
The github-actions bot added the documentation label on Dec 12, 2025.
@gemini-code-assist
Contributor

Summary of Changes

Hello @JackChuang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request expands the attention backend capabilities by integrating flashmla with support for FP4 KV cache on SM90. It ensures that flashmla is properly recognized as a KV4-compatible backend in the server configuration and updates the documentation to reflect this new capability, enhancing the system's performance and compatibility with advanced hardware.

Highlights

  • New Backend Support: Introduced flashmla as a KV4-capable attention backend, specifically for running with FP4 KV cache on SM90, addressing a gap in previous testing.
  • Configuration Update: Modified server_args.py to correctly list flashmla among the KV4-compatible attention MLA backends, ensuring proper system recognition.
  • Documentation Enhancement: Updated attention_backend.md to reflect FlashMLA's newly added support for FP4 KV Cache, providing clear information on its capabilities.
  • Validation: Tested flashmla and fa3 on H20 with specific models (DeepSeek-R1-W4AFP8 and Qwen3-235B-A22B) to ensure functionality and performance.


The gemini-code-assist bot left a comment.


Code Review

This pull request correctly adds support for the flashmla attention backend with FP4 KV cache, including documentation and server argument updates. The changes are well-aligned with the PR's goal. I have one minor suggestion to improve code maintainability by sorting the list of backend choices.

I noticed that the new file for the flashmla backend and its registration were not in the provided diff, so I couldn't perform a line-by-line review. However, from a high-level look, the implementation seems consistent with existing backends.

It's also worth noting that the checklist indicates unit tests are missing. Adding tests for the new backend would be valuable for ensuring long-term stability and correctness.
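
For example, a minimal pytest along these lines could pin the new choice down. This is only a sketch: the field names mirror ServerArgs, but the "fp4" dtype string and whether ServerArgs accepts this combination at construction time are assumptions:

```python
# Hypothetical test sketch; the accepted FP4 dtype string is an assumption.
from sglang.srt.server_args import ServerArgs

def test_flashmla_is_a_valid_kv4_backend():
    args = ServerArgs(
        model_path="deepseek-ai/DeepSeek-R1",  # placeholder checkpoint path
        attention_backend="flashmla",
        kv_cache_dtype="fp4",  # assumed dtype string for an FP4 KV cache
    )
    assert args.attention_backend == "flashmla"
```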

Comment thread on python/sglang/srt/server_args.py
@Fridge003
Collaborator

/tag-and-rerun-ci

@Fridge003 Fridge003 merged commit 171b442 into sgl-project:main Dec 12, 2025
82 of 144 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 13, 2025
…n_eagle3_npu

* 'main' of https://github.com/sgl-project/sglang: (121 commits)
  Super tiny add gsp-fast-prepare (sgl-project#14992)
  Super tiny fix confusing slash_command_handler hint (sgl-project#14976)
  Super tiny remove unused argument (sgl-project#14966)
  [registry] Add a strict mode to model registration (sgl-project#14933)
  Feature/Fix multi lora scheduler blocking issue and evict LoRA None lastly (sgl-project#14795)
  Tune triton fused moe for the case of glm-4.6-fp8 b200 tp4 (sgl-project#15020)
  [model-gateway] refactor: unify worker management into modular workflow structure (sgl-project#15010)
  Update ci permission (sgl-project#15014)
  Refactor of http and engine entrypoints to allow custom override  (sgl-project#14869)
  Add KV4-capable backend flashmla and update server args (sgl-project#14989)
  Revert several PRs (sgl-project#14958)
  Super tiny extract route_typed_request_once (sgl-project#14951)
  Fix CI by reverting incorrect metric check logic (sgl-project#15004)
  [model-gateway] refactor: workflow engine cleanup and minor optimization (sgl-project#15001)
  [model-gateway] fix: handle workflow deadlock and optimize cycle detection (sgl-project#15000)
  [model-gateway] feat: add DAG parallel execution support and workflow optimization (sgl-project#14999)
  [model-gateway] refactor: extract workflow engine to src/workflow module (sgl-project#14996)
  Update CODEOWNERS for multimodal_gen (sgl-project#14995)
  [diffusion] docker: Tiny fix Docker Hub link in installation documentation (sgl-project#14987)
  [PD] Add decode PP event loop for PD disaggregation (sgl-project#14945)
  ...

# Conflicts:
#	python/sglang/srt/model_executor/piecewise_cuda_graph_runner.py
Prozac614 pushed a commit to Prozac614/sglang that referenced this pull request Dec 17, 2025
…14989)

Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026
…14989)

Signed-off-by: Ho-Ren (Jack) Chuang <horenchuang@bytedance.com>

Labels

documentation, run-ci

2 participants