
[Perf] Tune MiniMax M2 fused moe kernel on H100 GPU #18851

Merged

Kangyan-Zhou merged 1 commit into sgl-project:main from zhendonghua:tune_minimax_m2 on Feb 15, 2026

Conversation

@zhendonghua
Contributor

Motivation

The fused_moe_kernel is not tuned for MiniMax-M2 on H100 GPUs, which leads to sub-optimal decode throughput.

This PR runs the Triton MoE kernel tuning script, benchmarks the decode throughput, and profiles the performance of the tuned fused_moe_kernel.

The fused_moe_kernel was tuned with the following command.

python benchmark/kernels/fused_moe_triton/tuning_fused_moe_triton.py \
    --model MiniMaxAI/MiniMax-M2 \
    --tp-size 4 \
    --dtype fp8_w8a8 \
    --disable-shared-experts-fusion \
    --tune
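
If I follow the tuning flow correctly, the script benchmarks candidate Triton launch parameters for each batch size and writes the best-performing set to a JSON config file named after the expert count (E), shard intermediate size (N), device name, and dtype; that is the file added under Modifications below.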

Modifications

Add a configuration JSON file: python/sglang/srt/layers/moe/fused_moe_triton/configs/triton_3_5_1/E=256,N=384,device_name=NVIDIA_H100_80GB_HBM3,dtype=fp8_w8a8,block_shape=[128, 128].json
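
For context, these config files map a token count M to a set of Triton launch parameters. A minimal sketch of the file's shape, with placeholder values rather than the actual tuned H100 entries from this PR:

import json

# Illustrative shape of a fused MoE Triton config file. Keys are token
# counts (M); values are Triton launch parameters. The numbers below are
# placeholders, not the tuned entries added by this PR.
example_config = {
    "1": {
        "BLOCK_SIZE_M": 16,
        "BLOCK_SIZE_N": 64,
        "BLOCK_SIZE_K": 128,
        "GROUP_SIZE_M": 1,
        "num_warps": 4,
        "num_stages": 3,
    },
    "64": {
        "BLOCK_SIZE_M": 64,
        "BLOCK_SIZE_N": 128,
        "BLOCK_SIZE_K": 128,
        "GROUP_SIZE_M": 8,
        "num_warps": 8,
        "num_stages": 4,
    },
}
print(json.dumps(example_config, indent=4))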

Accuracy Tests

Benchmarking and Profiling

Serve the model with

python -m sglang.launch_server \
    --model-path MiniMaxAI/MiniMax-M2 \
    --tp-size 4 \
    --tool-call-parser minimax-m2 \
    --reasoning-parser minimax-append-think \
    --trust-remote-code \
    --port 23333 \
    --mem-fraction-static 0.85
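
Before benchmarking, a quick request against the OpenAI-compatible endpoint confirms the server is up. A minimal sketch, assuming the standard /v1/chat/completions route and an arbitrary prompt:

import requests

# Sanity-check the OpenAI-compatible chat endpoint on the port used above.
resp = requests.post(
    "http://localhost:23333/v1/chat/completions",
    json={
        "model": "MiniMaxAI/MiniMax-M2",
        "messages": [{"role": "user", "content": "Say hello."}],
        "max_tokens": 16,
    },
)
print(resp.json()["choices"][0]["message"]["content"])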

Benchmark the performance with

python3 -m sglang.bench_serving \
  --backend sglang-oai \
  --dataset-name random \
  --random-input-len 1024 \
  --random-output-len 1024 \
  --random-range-ratio 0.98 \
  --num-prompts 80 \
  --max-concurrency 16 \
  --port 23333
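
Note: --random-range-ratio 0.98 should keep the sampled input and output lengths within about 2% of the nominal 1024 tokens, so per-request decode lengths are nearly uniform across the 80 prompts.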

The output is as depicted below. The output throughput is 1008 token/s with the tuned kernel, versus 960 token/s without it (1008 / 960 ≈ 1.05, i.e. about a 5% speedup).
[Screenshot: bench_serving benchmark output, 2026-02-14]

Profile with

python3 -m sglang.bench_serving \
  --dataset-name random \
  --backend sglang-oai \
  --random-range-ratio 1 \
  --random-input-len 1200 \
  --random-output-len 20 \
  --max-concurrency 1 \
  --num-prompts 5 \
  --port 23333 \
  --profile

The performance is as depicted below. The tuned fused_moe_kernel takes ~12 us, versus ~15.5 us untuned (about a 1.3x kernel-level speedup).
[Screenshot: profiler trace of the tuned fused_moe_kernel, 2026-02-14]

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zhendonghua, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the inference performance of the MiniMax-M2 model when deployed on H100 GPUs. It addresses a sub-optimal throughput issue by integrating a newly tuned configuration for the fused_moe_kernel, leading to measurable improvements in both overall throughput and kernel-specific latency.

Highlights

  • Performance Tuning: Introduced a new configuration file for the fused_moe_kernel specifically tuned for the MiniMax-M2 model on H100 GPUs, addressing sub-optimal performance.
  • Throughput Improvement: Achieved a 5% speedup in decode throughput for the MiniMax-M2 model, increasing from 960 token/s to 1008 token/s with the tuned kernel.
  • Kernel Latency Reduction: Reduced the fused_moe_kernel latency from approximately 15.5 us to 12 us, demonstrating improved efficiency.


Changelog
  • python/sglang/srt/layers/moe/fused_moe_triton/configs/triton_3_5_1/E=256,N=384,device_name=NVIDIA_H100_80GB_HBM3,dtype=fp8_w8a8,block_shape=[128, 128].json
    • Added a new JSON configuration file containing optimized kernel parameters for the fused MoE Triton kernel, specifically for the MiniMax-M2 model on H100 GPUs with FP8_W8A8 dtype.
Activity
  • Tuned the Triton MoE Kernel using a dedicated script with specified parameters for MiniMax-M2 and H100.
  • Benchmarked the decode throughput of the MiniMax-M2 model with the new kernel configuration, showing a 5% speedup.
  • Profiled the performance of the tuned fused_moe_kernel to measure latency improvements, reducing it to ~12 us.
  • Ensured code formatting adheres to project guidelines.
  • Provided detailed accuracy and speed benchmark results in the PR description.
  • Confirmed adherence to SGLang code style guidance.



@gemini-code-assist (Bot) left a comment


Code Review

This pull request introduces a performance optimization for the MiniMax-M2 model on H100 GPUs by adding a new pre-tuned configuration file for the fused MoE Triton kernel. The PR description clearly outlines the tuning process and provides benchmark results demonstrating a ~5% throughput improvement, which is a valuable contribution. The new configuration file is well-structured and follows the existing conventions within the project. The change is self-contained and looks good.

@Qiaolin-Yu added the ready-to-merge label on Feb 15, 2026
@Kangyan-Zhou merged commit 922fbc2 into sgl-project:main on Feb 15, 2026
56 of 65 checks passed
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026

Labels

ready-to-merge The PR is ready to merge after the CI is green.


4 participants