
[Piecewise CUDA Graph] Support ModelOpt FP4 #13101

Merged
ispobock merged 16 commits into sgl-project:main from bzhng-development:brayden/register-fp4-mm
Nov 16, 2025

Conversation

Collaborator

@b8zhong commented Nov 12, 2025

Benchmark setup (server launched with piecewise CUDA graph enabled):

TORCHDYNAMO_VERBOSE=1 python3 -m sglang.launch_server --model-path nvidia/Llama-3.1-8B-Instruct-FP4 --quantization modelopt_fp4 --model-loader-extra-config '{"enable_multithread_load": true, "num_threads": 8}' --kv-cache-dtype bfloat16 --enable-piecewise-cuda-graph --piecewise-cuda-graph-max-tokens 65536
curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input-len 1024 --random-output-len 16 --random-range-ratio 1.0 --num-prompts 16   --max-concurrency 1  --output-file res_before.jsonl

curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input-len 1024 --random-output-len 16 --random-range-ratio 1.0 --num-prompts 128  --max-concurrency 4  --output-file res_before.jsonl

curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input-len 1024 --random-output-len 16 --random-range-ratio 1.0 --num-prompts 256 --max-concurrency 16 --output-file res_before.jsonl

curl http://127.0.0.1:30000/flush_cache
python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input-len 1024 --random-output-len 16 --random-range-ratio 1.0 --num-prompts 512 --max-concurrency 32 --output-file res_before.jsonl

With piecewise CUDA graph (this PR):

+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|    |   max_concurrency |   input_throughput |   output_throughput |   mean_ttft_ms |   median_ttft_ms |   p99_ttft_ms |   mean_tpot_ms |   median_tpot_ms |   p99_tpot_ms |   per_user_throughput |
+====+===================+====================+=====================+================+==================+===============+================+==================+===============+=======================+
|  0 |             1.000 |          14577.392 |             227.772 |         16.724 |           16.950 |        17.871 |          3.509 |            3.510 |         3.524 |               227.772 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  1 |             4.000 |          40186.674 |             627.917 |         37.188 |           39.206 |        88.059 |          4.220 |            4.045 |         5.260 |               156.979 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  2 |            16.000 |          82977.666 |            1296.526 |         94.930 |           95.574 |       136.002 |          6.522 |            5.931 |        10.390 |                81.033 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  3 |            32.000 |         100562.743 |            1571.293 |        171.090 |          179.324 |       275.122 |          9.743 |            9.029 |        18.928 |                49.103 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+

Without piecewise CUDA graph (baseline):

+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|    |   max_concurrency |   input_throughput |   output_throughput |   mean_ttft_ms |   median_ttft_ms |   p99_ttft_ms |   mean_tpot_ms |   median_tpot_ms |   p99_tpot_ms |   per_user_throughput |
+====+===================+====================+=====================+================+==================+===============+================+==================+===============+=======================+
|  0 |             1.000 |          13911.476 |             217.367 |         19.747 |           19.688 |        23.477 |          3.532 |            3.530 |         3.556 |               217.367 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  1 |             4.000 |          31418.749 |             490.918 |         69.319 |           43.535 |       906.381 |          3.972 |            3.811 |         4.560 |               122.729 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  2 |            16.000 |          83809.124 |            1309.518 |        104.148 |          116.887 |       135.948 |          5.790 |            4.948 |        10.429 |                81.845 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
|  3 |            32.000 |         103002.783 |            1609.418 |        178.333 |          163.975 |       247.206 |          8.782 |            9.148 |        16.060 |                50.294 |
+----+-------------------+--------------------+---------------------+----------------+------------------+---------------+----------------+------------------+---------------+-----------------------+
Summary (Before = without piecewise CUDA graph, After = with):

Metric                Concurrency      Before       After    Δ (%)
Input Throughput                1    13911.48    14577.39    +4.8%
                                4    31418.75    40186.67   +27.9%
                               16    83809.12    82977.67    -1.0%
                               32   103002.78   100562.74    -2.4%
Output Throughput               1      217.37      227.77    +4.8%
                                4      490.92      627.92   +27.9%
                               16     1309.52     1296.53    -1.0%
                               32     1609.42     1571.29    -2.4%
Mean TTFT (ms)                  1       19.75       16.72   -15.3%
                                4       69.32       37.19   -46.3%
                               16      104.15       94.93    -8.9%
                               32      178.33      171.09    -4.1%
Mean TPOT (ms)                  1        3.53        3.51    -0.6%
                                4        3.97        4.22    +6.3%
                               16        5.79        6.52   +12.6%
                               32        8.78        9.74   +10.9%
Per-user Throughput             1      217.37      227.77    +4.8%
                                4      122.73      156.98   +27.9%
                               16       81.85       81.03    -1.0%
                               32       50.29       49.10    -2.4%

Accuracy (GSM8K):

Before:

python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 500
100%|██████████| 1319/1319 [00:12<00:00, 107.75it/s]
Accuracy: 0.726
Invalid: 0.003
Latency: 12.293 s
Output throughput: 10408.519 token/s

After:

python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 500
100%|██████████| 1319/1319 [00:09<00:00, 139.28it/s]
Accuracy: 0.719
Invalid: 0.002
Latency: 9.492 s
Output throughput: 13838.168 token/s

@github-actions bot added the quant (LLM Quantization) label Nov 12, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances SGLang's capabilities by integrating ModelOpt FP4 quantization with its piecewise CUDA graph functionality. The core change involves defining a custom PyTorch operator for FP4 GEMM, which allows these quantized operations to be effectively captured and optimized by CUDA graphs. This integration aims to improve performance, as evidenced by the provided benchmarks showing better throughput and reduced latency, particularly at lower concurrency levels, while maintaining comparable accuracy.
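
For context, the pattern looks roughly like the following minimal, runnable sketch. Everything here is illustrative rather than the PR's actual code: the op name demo::scaled_mm stands in for sglang::fp4_gemm, and a plain scaled matmul stands in for the FP4 kernel.

import torch

# Register an opaque custom op. torch.compile and CUDA graph capture treat
# it as a single node instead of tracing into its body, so backend-specific
# kernels (e.g. cutlass vs. flashinfer) can be dispatched inside it freely.
@torch.library.custom_op("demo::scaled_mm", mutates_args=())
def scaled_mm(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    # Stand-in for the real quantized GEMM kernel.
    return alpha * (a @ b.t())

# Fake (meta) implementation: reports output shape/dtype without running
# the kernel, which is what lets torch.compile trace through the op.
@scaled_mm.register_fake
def _(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    return a.new_empty((a.shape[0], b.shape[0]))

a = torch.randn(8, 16)
b = torch.randn(32, 16)
out = torch.ops.demo.scaled_mm(a, b, 0.5)  # shape (8, 32)

The PR applies this same two-part registration (eager implementation plus fake implementation) to the FP4 GEMM and quantization kernels.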

Highlights

  • ModelOpt FP4 Integration: Introduces support for ModelOpt FP4 quantization within the SGLang framework, specifically for use with piecewise CUDA graphs.
  • Custom PyTorch Operator: Wraps the FP4 GEMM operation in a custom torch.library.custom_op (sglang::fp4_gemm) to enable better graph capture and optimization.
  • Performance Improvements: Benchmarks demonstrate notable improvements in input/output throughput and reduced time to first token (TTFT) at lower concurrencies (e.g., +27.9% output throughput and -46.3% mean TTFT at concurrency 4).
  • Accuracy Impact: Shows a minor change in accuracy (from 0.726 to 0.719) with a significant improvement in output throughput (from 10408.519 to 13838.168 token/s) in the GSM8K benchmark.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request adds support for ModelOpt FP4 quantization with Piecewise CUDA Graph. It achieves this by wrapping the FP4 GEMM and quantization kernels as torch.library custom ops, complete with fake implementations for torch.compile. This is a robust approach for integrating custom kernels with CUDA graphs and should yield performance benefits. The implementation is clean and correct. I have one minor suggestion for code cleanup.

Comment on lines 1119 to 1121
backend = (
    FLASHINFER_FP4_GEMM_BACKEND if FLASHINFER_FP4_GEMM_BACKEND else "cutlass"
)
Contributor


Severity: medium

The backend variable is defined here but is no longer used since the backend selection logic has been moved inside the _sglang_fp4_gemm custom op. This unused code can be removed to improve clarity.
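
In other words, the suggested fix is simply to delete the dead assignment (sketched here under the assumption that the quoted snippet is the only remaining use of backend):

# Before: computed but never read, since _sglang_fp4_gemm now picks the
# backend internally.
backend = (
    FLASHINFER_FP4_GEMM_BACKEND if FLASHINFER_FP4_GEMM_BACKEND else "cutlass"
)

# After: remove the assignment entirely; no replacement code is needed.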

Collaborator Author

b8zhong commented Nov 13, 2025

Writing report to /tmp/mgsm_en_nvidia_Llama-3.1-8B-Instruct-FP4.html
{'en': np.float64(0.788), 'en:std': np.float64(0.40872484632084705), 'group_latin': np.float64(0.788), 'group_latin:std': np.float64(0.40872484632084705), 'score:std': np.float64(0.40872484632084705), 'score': np.float64(0.788)}
Writing results to /tmp/mgsm_en_nvidia_Llama-3.1-8B-Instruct-FP4.json
Total latency: 11.785 s
Score: 0.788
MGSM Accuracy: 0.788
.

========================================================================================== warnings summary ===========================================================================================
../../usr/local/lib/python3.12/dist-packages/_pytest/config/__init__.py:1474
  /usr/local/lib/python3.12/dist-packages/_pytest/config/__init__.py:1474: PytestConfigWarning: Unknown config option: asyncio_mode
  
    self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================================== 1 passed, 1 warning in 87.25s (0:01:27) ===============================================================================

@ispobock merged commit 24a25ff into sgl-project:main on Nov 16, 2025
236 of 257 checks passed
@b8zhong deleted the brayden/register-fp4-mm branch on November 19, 2025 23:41
