
[RL] Add an nvfp4 online input scale mode #18012

Closed
zianglih wants to merge 17 commits into sgl-project:main from zianglih:nvfp4-override

Conversation

@zianglih
Contributor

@zianglih zianglih commented Jan 31, 2026

Motivation

@HumansAnd

Miles NVFP4 RL: radixark/miles#546

The NVFP4 quantization recipe uses a two-level micro-block scaling strategy for both input activations and model weights. For inference efficiency, the second-level fp32 input scale is usually calibrated offline and stored statically in the model checkpoint.

However, in RL the model is being trained continuously, so the dynamic range of the input activations keeps shifting. One option is to compute the scale on the fly.

Beyond RL, this mode may also be useful for better nvfp4 quantization quality, at the cost of some performance overhead.
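
For reference, a minimal sketch of the two-level scheme described above, using the usual NVFP4 constants (fp4 e2m1 max 6.0, fp8 e4m3 max 448.0). Names are illustrative, not sglang's actual kernels:

import torch

FP4_MAX = 6.0    # max magnitude representable in fp4 e2m1
FP8_MAX = 448.0  # max magnitude representable in fp8 e4m3
BLOCK = 16       # nvfp4 micro-block size

def nvfp4_two_level_scales(x: torch.Tensor):
    assert x.numel() % BLOCK == 0
    # Level 2: one fp32 global scale per tensor, chosen so the per-block
    # scales fit into fp8 e4m3 after being multiplied by it.
    amax = x.abs().amax().float().clamp(min=1e-12)
    global_scale = (FP8_MAX * FP4_MAX) / amax
    # Level 1: one scale per 16-element micro-block, stored in fp8 e4m3.
    blocks = x.float().reshape(-1, BLOCK)
    block_scales = (blocks.abs().amax(dim=1) / FP4_MAX) * global_scale
    return global_scale, block_scales.to(torch.float8_e4m3fn)

Offline PTQ fixes the level-2 input scale once from calibration data; the mode added here recomputes it from the live batch instead.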

Modifications

  • Added SGLANG_NVFP4_ONLINE_INPUT_SCALE: input scales are computed per batch from activations, while weight scales remain checkpoint-derived (see the sketch after this list).
  • Implemented stateless online-input-scale updates across NVFP4 paths (ModelOpt linear/MoE, cutlass_moe_fp4, FlashInferFP4MoE, compressed-tensors linear/MoE): rebuild alpha/scale tensors from current-batch activations + fixed checkpoint weight scales, with no batch-to-batch carryover.
  • Updated standard and flashinfer token dispatchers to write current-batch input_scale_inv into input_global_scale in online mode, ensuring downstream kernels use in-flight scales.
  • Documented the TRTLLM FP4 MoE repack path as one-way: /update_weights_from_disk is currently unsupported there without a broader refactor.
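
A minimal sketch of the gating this adds; resolve_input_global_scale is a hypothetical helper name, not the PR's exact API:

import os
import torch

def resolve_input_global_scale(x: torch.Tensor,
                               checkpoint_scale: torch.Tensor) -> torch.Tensor:
    # Default path: the statically calibrated scale from the checkpoint.
    if os.environ.get("SGLANG_NVFP4_ONLINE_INPUT_SCALE", "0") != "1":
        return checkpoint_scale
    # Online path: derive the scale from this batch's activation amax only,
    # with no batch-to-batch carryover; weight scales stay checkpoint-derived.
    amax = x.abs().amax().float().clamp(min=1e-12)
    return (448.0 * 6.0) / amax

Downstream, the GEMM alpha (roughly 1 / (input_global_scale * weight_global_scale)) then has to be rebuilt from the per-batch value, which is what the alpha/scale rebuild across the ModelOpt, cutlass_moe_fp4, FlashInferFP4MoE, and compressed-tensors paths refers to.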

Accuracy Tests

Serving existing nvfp4 checkpoints

Qwen3-30B-A3B

unset SGLANG_NVFP4_ONLINE_INPUT_SCALE
python -m sglang.launch_server --kv-cache-dtype bf16 --model nvidia/Qwen3-30B-A3B-NVFP4 &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.940
Invalid: 0.000
Latency: 10.099 s
Output throughput: 14608.879 token/s
# Trial 2
Accuracy: 0.937
Invalid: 0.000
Latency: 9.846 s
Output throughput: 14987.632 token/s
# Trial 3
Accuracy: 0.938
Invalid: 0.000
Latency: 9.585 s
Output throughput: 15336.795 token/s

export SGLANG_NVFP4_ONLINE_INPUT_SCALE=1
python -m sglang.launch_server --kv-cache-dtype bf16 --model nvidia/Qwen3-30B-A3B-NVFP4 &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.932
Invalid: 0.000
Latency: 13.836 s
Output throughput: 10463.426 token/s
# Trial 2
Accuracy: 0.940
Invalid: 0.000
Latency: 13.750 s
Output throughput: 10715.779 token/s
# Trial 3
Accuracy: 0.940
Invalid: 0.000
Latency: 13.795 s
Output throughput: 10534.023 token/s

DeepSeek-R1-0528

unset SGLANG_NVFP4_ONLINE_INPUT_SCALE
python3 -m sglang.launch_server --kv-cache-dtype bf16 --model-path nvidia/DeepSeek-R1-0528-FP4 --tp 8 --dp 8 --enable-dp-attention &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.979
Invalid: 0.000
Latency: 30.664 s
Output throughput: 3978.382 token/s
# Trial 2
Accuracy: 0.983
Invalid: 0.000
Latency: 22.350 s
Output throughput: 5430.838 token/s
# Trial 3
Accuracy: 0.983
Invalid: 0.000
Latency: 21.023 s
Output throughput: 5754.525 token/s

export SGLANG_NVFP4_ONLINE_INPUT_SCALE=1
python3 -m sglang.launch_server --kv-cache-dtype bf16 --model-path nvidia/DeepSeek-R1-0528-FP4 --tp 8 --dp 8 --enable-dp-attention &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.988
Invalid: 0.000
Latency: 27.661 s
Output throughput: 4359.516 token/s
# Trial 2
Accuracy: 0.978
Invalid: 0.000
Latency: 25.169 s
Output throughput: 4881.184 token/s
# Trial 3
Accuracy: 0.976
Invalid: 0.000
Latency: 24.939 s
Output throughput: 4885.909 token/s

Serving PTQ checkpoint

PTQ checkpoints converted from radixark/miles#536

Qwen3-235B-A22B

# bf16 baseline
python -m sglang.launch_server --kv-cache-dtype bf16 --model Qwen/Qwen3-235B-A22B --tp 8 &
cd /sgl-workspace/sglang/
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.968
Invalid: 0.000
Latency: 24.215 s
Output throughput: 7203.702 token/s
# Trial 2
Accuracy: 0.964
Invalid: 0.000
Latency: 22.295 s
Output throughput: 7798.213 token/s
# Trial 3
Accuracy: 0.965
Invalid: 0.000
Latency: 21.229 s
Output throughput: 8205.755 token/s


# nvfp4 PTQ full
export SGLANG_NVFP4_ONLINE_INPUT_SCALE=1
python -m sglang.launch_server --kv-cache-dtype bf16 --model /data/models/Qwen3-235B-A22B-NVFP4-PTQ-full --tp 8 &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.972
Invalid: 0.000
Latency: 25.941 s
Output throughput: 6789.604 token/s
# Trial 2
Accuracy: 0.974
Invalid: 0.001
Latency: 24.445 s
Output throughput: 7287.987 token/s
# Trial 3
Accuracy: 0.974
Invalid: 0.000
Latency: 24.114 s
Output throughput: 7319.668 token/s

Qwen3-30B-A3B-Instruct-2507

# bf16 baseline
python -m sglang.launch_server --kv-cache-dtype bf16 --model Qwen/Qwen3-30B-A3B-Instruct-2507 &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.966
Invalid: 0.000
Latency: 11.764 s
Output throughput: 14510.622 token/s
# Trial 2
Accuracy: 0.967
Invalid: 0.000
Latency: 11.875 s
Output throughput: 14379.529 token/s
# Trial 3
Accuracy: 0.967
Invalid: 0.000
Latency: 11.406 s
Output throughput: 14962.158 token/s

# full nvfp4 PTQ
export SGLANG_NVFP4_ONLINE_INPUT_SCALE=1
python miles/tools/convert_hf_to_nvfp4.py --model-dir /data/models/Qwen3-30B-A3B-Instruct-2507 --save-dir /data/models/Qwen3-30B-A3B-Instruct-2507-NVFP4-PTQ-full
python -m sglang.launch_server --kv-cache-dtype bf16 --model /data/models/Qwen3-30B-A3B-Instruct-2507-NVFP4-PTQ-full &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.967
Invalid: 0.000
Latency: 14.603 s
Output throughput: 11563.085 token/s
# Trial 2
Accuracy: 0.969
Invalid: 0.000
Latency: 14.986 s
Output throughput: 11319.669 token/s
# Trial 3
Accuracy: 0.961
Invalid: 0.000
Latency: 15.052 s
Output throughput: 11204.168 token/s


# nvfp4 PTQ + keep last 8
export SGLANG_NVFP4_ONLINE_INPUT_SCALE=1
python miles/tools/convert_hf_to_nvfp4.py --keep-last-n 8 --model-dir /data/models/Qwen3-30B-A3B-Instruct-2507 --save-dir /data/models/Qwen3-30B-A3B-Instruct-2507-NVFP4-PTQ
python -m sglang.launch_server --kv-cache-dtype bf16 --model /data/models/Qwen3-30B-A3B-Instruct-2507-NVFP4-PTQ &
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1209 --parallel 1209 --platinum
# Trial 1
Accuracy: 0.966
Invalid: 0.000
Latency: 14.068 s
Output throughput: 12021.997 token/s
# Trial 2
Accuracy: 0.965
Invalid: 0.000
Latency: 14.653 s
Output throughput: 11502.860 token/s
# Trial 3
Accuracy: 0.961
Invalid: 0.000
Latency: 14.165 s
Output throughput: 11923.289 token/s

Benchmarking and Profiling

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist
Contributor

Summary of Changes

Hello @zianglih, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to NVFP4 quantization by implementing an "online input scale mode." This new feature allows the system to dynamically calculate and apply input scaling factors during model execution, adapting to the characteristics of the input data. The primary goal is to improve the accuracy and efficiency of NVFP4 operations, particularly within Mixture of Experts (MoE) layers. The changes are carefully integrated across the codebase, including environment variable control, core scaling logic, and optimizations for CUDA graph compatibility, ensuring a robust and performant solution.

Highlights

  • Dynamic NVFP4 Input Scaling: Introduced an optional online input scaling mode for NVFP4 quantization, enabled via the SGLANG_NVFP4_ONLINE_INPUT_SCALE environment variable. This mode dynamically computes input scales based on runtime data.
  • Enhanced Quantization Precision: Implemented a new utility function nvfp4_compute_input_scale_and_inv to calculate input scales by analyzing the maximum absolute value of input tensors, aiming to improve the precision of NVFP4 operations.
  • CUDA Graph Compatibility & Optimization: Integrated pre-allocation of buffers for dynamic scaling parameters across various MoE layers (e.g., cutlass_moe_params, compressed_tensors_moe, modelopt_quant) to prevent runtime memory reallocations and ensure smooth operation with CUDA graph capture.
  • FlashInfer Backend Adaptability: Added logic to automatically switch the FlashInfer FP4 GEMM backend to 'cutlass' during CUDA graph capture when online scaling is active, addressing potential compatibility issues with 'auto' or 'cudnn' backends.
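
A minimal sketch of that pre-allocation pattern, with illustrative names (not the PR's actual buffers): the scale tensor is allocated once, and later batches update it in place so a captured CUDA graph keeps replaying against the same device pointer:

import torch

class OnlineInputScale:
    """Illustrative only: a per-layer scale buffer that is CUDA-graph safe."""

    def __init__(self, device: str = "cuda"):
        # Allocated once before graph capture, so captured kernels keep
        # reading from the same device address on every replay.
        self.input_global_scale = torch.ones(1, dtype=torch.float32, device=device)

    def update_from_batch(self, x: torch.Tensor) -> None:
        # In-place copy_ updates the value without reallocating the tensor.
        amax = x.abs().amax().float().clamp(min=1e-12)
        self.input_global_scale.copy_((448.0 * 6.0) / amax)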


Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request introduces an online input scaling mode for nvfp4 quantization. This is a significant feature that touches multiple parts of the quantization and MoE layers. The implementation correctly adds the new environment variable and integrates the online scaling logic where needed. However, there are several instances of code duplication for the online scaling logic across different files and within the same file. Refactoring this duplicated code into shared helper functions would greatly improve the maintainability and readability of the codebase. Additionally, some of the new logic could be simplified for better clarity.

Comment thread python/sglang/srt/layers/moe/cutlass_moe.py Outdated
Comment thread python/sglang/srt/layers/moe/fused_moe_triton/layer.py Outdated
@zianglih zianglih marked this pull request as draft January 31, 2026 07:15
@zianglih zianglih marked this pull request as ready for review January 31, 2026 09:57
Comment thread python/sglang/srt/layers/moe/token_dispatcher/base.py Outdated
Comment thread python/sglang/srt/layers/moe/cutlass_moe.py Outdated
@b8zhong
Collaborator

b8zhong commented Feb 12, 2026

@zianglih Could you fix the merge conflicts?

@zianglih
Contributor Author

@b8zhong thanks for reviewing. Let me take a look!

@BitPhinix

Very interested in this as well!

@zianglih
Contributor Author

Hi @b8zhong @BitPhinix, I'll clean up this PR this week.

@zianglih zianglih marked this pull request as draft February 28, 2026 23:57
@zianglih zianglih changed the title Add an nvfp4 online input scale mode [RL] Add an nvfp4 online input scale mode Mar 1, 2026
@github-actions github-actions Bot added the documentation Improvements or additions to documentation label Mar 1, 2026
@zianglih zianglih marked this pull request as ready for review March 1, 2026 06:33
@gemini-code-assist
Contributor

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

@zianglih
Contributor Author

zianglih commented Mar 1, 2026

Hi @b8zhong, I have refactored the implementation. Could you review it again? Thanks!

@zianglih
Contributor Author

zianglih commented Mar 1, 2026

Most NVIDIA CI jobs passed, with only 1 irrelevant failure.

@wolfcomos
Contributor

wolfcomos commented Mar 2, 2026

This change looks very beneficial for enabling the NVFP4 online quantizer. Having the compute_input_scale_and_inv function makes it much easier to reuse when adding a weight quantization path.

@zianglih
Contributor Author

zianglih commented Mar 4, 2026

Hi @b8zhong, do you plan to merge the PR? Thanks!

@b8zhong b8zhong enabled auto-merge (squash) March 4, 2026 22:14
@b8zhong
Collaborator

b8zhong commented Mar 5, 2026

@zianglih No need to merge main in anymore. I'll rerun until NV CI passes.

@zianglih
Contributor Author

zianglih commented Mar 8, 2026

Hi @b8zhong, there is 1 failed NV CI job. Could you rerun it? Thanks!

auto-merge was automatically disabled March 10, 2026 23:04

Head branch was pushed to by a user without write access

@zianglih
Contributor Author

Resolved a minor merge conflict; no functional changes. The failed NV CI job is unrelated to this PR.

Zhichenzzz added a commit to Zhichenzzz/sglang that referenced this pull request Apr 4, 2026
Two features for NVFP4 RL training with miles:

1. post_process_weights endpoint: Re-runs process_weights_after_loading
   on NVFP4 layers after in-place weight updates, handling padding and
   shuffling for GEMM kernels. In RL mode (enable_memory_saver), original
   MoE weights are preserved for reloading instead of being deleted.

2. Online input scale (PR sgl-project#18012 by zianglih): Dynamically computes
   NVFP4 input_scale from per-batch activation amax at inference time,
   instead of using stale checkpoint-derived scales. Essential for RL
   where model weights change every training step. Enabled via
   SGLANG_NVFP4_ONLINE_INPUT_SCALE=1.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@zianglih
Contributor Author

zianglih commented May 1, 2026

Closing this PR; a better implementation is #22918.

@zianglih zianglih closed this May 1, 2026

Labels

blackwell (SM100/SM120) · documentation (Improvements or additions to documentation) · high priority · quant (LLM Quantization) · run-ci


4 participants