
[Feature] Fuse mrope all in 1 kernel #14906

Merged
Fridge003 merged 5 commits into sgl-project:main from DarkSharpness:feat_fuse_mrope on Dec 15, 2025

Conversation

@DarkSharpness
Collaborator

Motivation

Same as #13199.

Modifications

Further fuse all of the mrope ops into a single kernel. Not 100% sure this achieves the best possible performance, but it should be better than the original implementation.
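
As a reference for what is being fused, below is a minimal PyTorch sketch of the mrope math that the single kernel launch now covers: gather cos/sin rows from the cache using the per-axis (t/h/w) positions, splice the three axes along the rotary dimension according to mrope_section, and rotate q/k in place. Shapes and names follow common mrope layouts and are illustrative assumptions, not this PR's exact signatures.

```python
import torch

def mrope_reference(
    positions: torch.Tensor,      # (3, num_tokens) long: t/h/w position ids
    q: torch.Tensor,              # (num_tokens, num_q_heads * head_dim), contiguous
    k: torch.Tensor,              # (num_tokens, num_k_heads * head_dim), contiguous
    cos_sin_cache: torch.Tensor,  # (max_pos, rotary_dim): [cos | sin] halves
    mrope_section: list,          # per-axis widths, summing to rotary_dim // 2
    head_dim: int,
) -> None:
    """Rotate q and k in place (mirrors the fused kernel's semantics)."""
    rotary_dim = cos_sin_cache.shape[-1]
    half = rotary_dim // 2
    # Gather cos/sin for each of the three position axes in one lookup.
    cache = cos_sin_cache[positions]          # (3, num_tokens, rotary_dim)
    cos3, sin3 = cache.chunk(2, dim=-1)       # each (3, num_tokens, half)
    # Splice the axes along the rotary dim: the first mrope_section[0]
    # channels use the t positions, the next mrope_section[1] use h, etc.
    cos = torch.cat(
        [c[i] for i, c in enumerate(cos3.split(mrope_section, dim=-1))], dim=-1
    )                                         # (num_tokens, half)
    sin = torch.cat(
        [s[i] for i, s in enumerate(sin3.split(mrope_section, dim=-1))], dim=-1
    )

    def rotate_(x: torch.Tensor) -> None:
        # NeoX-style rotation on the first rotary_dim channels of each head.
        xh = x.view(x.shape[0], -1, head_dim)
        x1 = xh[..., :half]
        x2 = xh[..., half:rotary_dim]
        c, s = cos.unsqueeze(1), sin.unsqueeze(1)  # broadcast over heads
        o1 = x1 * c - x2 * s
        o2 = x2 * c + x1 * s
        xh[..., :half] = o1
        xh[..., half:rotary_dim] = o2

    rotate_(q)
    rotate_(k)
```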

Accuracy Tests

Benchmarking and Profiling

```bash
# server
python3 -m sglang.launch_server --model-path Qwen/Qwen3-VL-30B-A3B-Thinking \
    --context-length 262144 --cuda-graph-max-bs 64 --reasoning-parser deepseek-r1 --disable-radix
# client
python -m sglang.test.send_one --profile
```

Before:

Decode throughput: 177 token/s

After:

Decode throughput: 189 token/s

Checklist

@gemini-code-assist
Contributor

Summary of Changes

Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant optimization to the multimodal Rotary Positional Embedding (mRoPE) implementation by consolidating several operations into a single, fused Triton kernel. This change aims to improve the overall efficiency and speed of the model's inference process, particularly in terms of decode throughput, by reducing computational overhead and enhancing data locality within the GPU. The modifications simplify the underlying API and integrate seamlessly into the existing MRotaryEmbedding structure.

Highlights

  • Performance Optimization: The primary goal of this pull request is to enhance the performance of multimodal Rotary Positional Embeddings (mRoPE) by fusing multiple operations into a single Triton kernel. This optimization has resulted in an increase in decode throughput from 177 token/s to 189 token/s, representing a ~6.7% improvement.
  • Triton Kernel Refactoring: The _triton_mrope_forward kernel has been refactored and renamed to _triton_mrope_forward_fused. It now directly accepts a combined cos_sin_cache_ptr and positions_ptr, simplifying data access and reducing overhead by eliminating the need for separate cos and sin tensors and num_tokens.
  • API Simplification: The Python-level triton_mrope function has been renamed to triton_mrope_fused and its signature updated to align with the new fused kernel. It now performs operations in-place, indicated by its None return type, and the triton_mrope_wrapper function has been removed.
  • Integration into MRotaryEmbedding: The _forward_triton method within the MRotaryEmbedding class has been streamlined. It now directly calls the new triton_mrope_fused function, removing the previous logic for handling cos, sin, and tensor reshaping, leading to cleaner and more efficient code (a sketch of the resulting call site follows this list).
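
Putting the highlights together, the simplified call site would look roughly like the hypothetical sketch below; the exact parameter names and order in the PR may differ.

```python
# Hypothetical sketch of _forward_triton after this PR; triton_mrope_fused's
# real signature may differ from what is shown here.
def _forward_triton(self, positions, query, key):
    # One fused launch: gather cos/sin from self.cos_sin_cache using the
    # (3, num_tokens) mrope positions and rotate query/key in place, so no
    # intermediate cos/sin tensors are materialized on the Python side.
    triton_mrope_fused(
        query, key, self.cos_sin_cache, positions,
        self.mrope_section, self.head_size, self.rotary_dim,
    )
    return query, key
```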

@gemini-code-assist Bot left a comment


Code Review

This pull request refactors the multimodal rotary positional embedding (mrope) operation by fusing it into a single Triton kernel. This change simplifies the Python code by offloading the complex logic to the kernel and, as shown in the benchmarks, improves performance. The changes are well-implemented. I have a couple of minor suggestions to further improve code quality and readability.

(Two review comment threads on python/sglang/srt/layers/rotary_embedding.py; one outdated.)
@yuan-luo self-requested a review December 12, 2025 03:21
@yuan-luo
Collaborator

Could you please paste the test_mrope.py result?

@DarkSharpness
Collaborator Author

all passed @yuan-luo
[screenshot: test_mrope.py results, all tests passed]

@yuan-luo
Collaborator

> all passed @yuan-luo

Awesome. Thanks.

@yuan-luo left a comment

LGTM.

DarkSharpness and others added 3 commits December 13, 2025 21:23
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@Fridge003
Collaborator

@DarkSharpness
Collaborator Author

Fixed by bypassing the shape check for positions @Fridge003

Passes the local test on H200:

```bash
python3 -m sglang.launch_server --model-path "Qwen/Qwen2.5-VL-7B-Instruct" \
    --enable-piecewise-cuda-graph --piecewise-cuda-graph-compiler eager --disable-radix-cache --load-format dummy
```

@Fridge003 merged commit f03bfa4 into sgl-project:main Dec 15, 2025
221 of 234 checks passed
Liwansi added a commit to iforgetmyname/sglang that referenced this pull request Dec 15, 2025
tonyluj pushed a commit to openanolis/sglang that referenced this pull request Dec 17, 2025
@DarkSharpness deleted the feat_fuse_mrope branch December 18, 2025 17:23
YChange01 pushed a commit to YChange01/sglang that referenced this pull request Jan 13, 2026