[Qwen3.5] Enable nvfp4 checkpoint#18937

Merged

ispobock merged 1 commit into sgl-project:main from hlu1:qwen on Feb 19, 2026

Conversation

@hlu1 (Collaborator) commented Feb 17, 2026

Motivation

Enable nvfp4 checkpoint

Modifications

Disable quantization for the linear attention and global attention modules, the visual model, and the MTP layer (sketched below).
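
For illustration, the selective-disable pattern could look like the minimal sketch below. This is not the PR's actual code: the helper name and module prefixes are assumptions.

    from typing import Optional

    # Modules that must stay in high precision under nvfp4 checkpoints
    # (prefixes are assumed for illustration).
    UNQUANTIZED_PREFIXES = ("linear_attn", "self_attn", "visual", "mtp")

    def maybe_disable_quant(quant_config: Optional[object], prefix: str) -> Optional[object]:
        # get_name() is how SGLang-style quant configs report their
        # method; assumed here. Return None so the module is built
        # unquantized; otherwise pass the original config through.
        if quant_config is not None and quant_config.get_name() == "modelopt_fp4":
            if any(p in prefix for p in UNQUANTIZED_PREFIXES):
                return None
        return quant_config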

Accuracy Tests

python3 -m sglang.test.run_eval --port 30000 --eval-name gsm8k --num-examples 1319 --max-tokens 16000 --repeat 1 --num-threads 1319 --num-shots 5

No MTP: 0.960
MTP3: 0.969, acceptance length: 3.47

Checklist

Review Process

  1. Ping Merge Oncalls to start the PR flow. See the PR Merge Process.
  2. Get approvals from CODEOWNERS and other reviewers.
  3. Trigger CI tests with comments or contact authorized users to do so.
    • /tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci
  4. After green CI and required approvals, ask Merge Oncalls to merge.

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello @hlu1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces support for nvfp4 checkpoints in Qwen3.5 models. It achieves this by selectively disabling modelopt_fp4 quantization for key components such as linear attention, QKV projections, the visual model, and the MTP layer. This ensures proper loading and functionality of models utilizing the nvfp4 format, optimizing for performance and memory efficiency.

Highlights

  • NVFP4 Checkpoint Support: Enabled support for nvfp4 checkpoints in Qwen3.5 models, allowing for efficient loading and execution.
  • Selective Quantization Disablement: Disabled modelopt_fp4 quantization for specific modules, including Linear, Global attention, the visual model, and the MTP layer, to ensure compatibility with nvfp4 checkpoints.
  • Improved Error Handling: Enhanced the error message in rotary_embedding.py to provide more context when an unknown RoPE scaling type is encountered.


Changelog
  • python/sglang/srt/layers/rotary_embedding.py
    • Enhanced the error message for unknown RoPE scaling types to include the rope_scaling configuration.
  • python/sglang/srt/models/qwen3_5.py
    • Introduced logic to disable modelopt_fp4 quantization for the linear_attn module.
    • Introduced logic to disable modelopt_fp4 quantization for the qkv_proj and output projection within the attention mechanism.
    • Removed .weight_scale and .input_scale from the list of ignored suffixes during weight loading.
    • Refined the condition for identifying fused expert parameters by using endswith for more precise matching (see the sketch after this changelog).
    • Updated a comment to reflect that extra parameters are skipped for GPTQ models, not GPTQ/modelopt models.
  • python/sglang/srt/models/qwen3_5_mtp.py
    • Added a check to disable modelopt_fp4 quantization for the MTP model when nvfp4 checkpoints are used.
  • python/sglang/srt/models/qwen3_vl.py
    • Implemented a mechanism to disable modelopt_fp4 quantization for the visual model when nvfp4 checkpoints are active.
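
The error-message and suffix-matching changes above lend themselves to short illustrations. The snippets below approximate the described behavior; they are not the PR's exact code, and the tensor suffixes are assumed.

    def check_rope_scaling(scaling_type: str, rope_scaling: dict) -> None:
        # rotary_embedding.py: include the full rope_scaling dict in the
        # error so a misconfigured checkpoint is easier to diagnose.
        raise ValueError(
            f"Unknown RoPE scaling type {scaling_type} (rope_scaling={rope_scaling})"
        )

    def is_fused_expert_param(name: str) -> bool:
        # qwen3_5.py: match fused expert parameters by suffix rather than
        # by substring, so unrelated names that merely contain the pattern
        # are not matched (suffixes assumed for illustration).
        return name.endswith(("experts.w13_weight", "experts.w2_weight"))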
Activity
  • The author, hlu1, created this pull request to enable nvfp4 checkpoint support for Qwen3.5 models.
  • The pull request includes gsm8k accuracy results (0.960 without MTP, 0.969 with MTP3), demonstrating functionality both with and without MTP.
  • A checklist for formatting, unit tests, documentation, and benchmarks is provided, but no specific completion status is indicated.
  • Instructions for the review process, including triggering CI tests and obtaining approvals, are outlined.

@gemini-code-assist (Bot) left a comment


Code Review

This pull request enables support for nvfp4 checkpoints by disabling quantization for specific modules, including the Linear Attention, Global Attention, visual model, and MTP layer when modelopt_fp4 quantization is detected. The changes are consistent across multiple model files and correctly implement the intended logic. I've added a few suggestions to improve code conciseness and maintainability by refactoring repeated conditional blocks into more compact expressions.
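
The compaction the review suggests might be as simple as replacing a repeated if/else guard with a single conditional expression; a hypothetical instance:

    def select_quant_config(quant_config, disable_for_nvfp4: bool):
        # One-expression guard instead of a repeated if/else block
        # (illustrative only; the actual suggestions are in the threads below).
        return None if disable_for_nvfp4 else quant_config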

  • Comment thread on python/sglang/srt/models/qwen3_5.py (outdated)
  • Comment thread on python/sglang/srt/models/qwen3_5.py (outdated)
  • Comment thread on python/sglang/srt/models/qwen3_vl.py (outdated)
@hlu1 (Collaborator, Author) commented Feb 17, 2026

/tag-and-rerun-ci

@rainj-me (Collaborator) commented

@hlu1, does the Qwen3.5 NVFP4 model have accuracy issues with sglang? I'm asking because when I tested with the BF16 model weights, gsm8k returned invalid results.

python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 100 --parallel 100 --port 28000

Accuracy: 0.690
Invalid: 0.010
Latency: 16.349 s
Output throughput: 1286.579 token/s

@hlu1 (Collaborator, Author) commented Feb 18, 2026

@hlu1, does the Qwen3.5 NVFP4 model have accuracy issues with sglang? I'm asking because when I tested with the BF16 model weights, gsm8k returned invalid results.

You need to use the test that applies the chat_template:

python3 -m sglang.test.run_eval --port 30000 --eval-name gsm8k --num-examples 1319 --max-tokens 16000 --repeat 1 --num-threads 1319 --num-shots 5

@rainj-me (Collaborator) commented

@hlu1, does the Qwen3.5 NVFP4 model have accuracy issues with sglang? I'm asking because when I tested with the BF16 model weights, gsm8k returned invalid results.

You need to use the test that applies the chat_template:

python3 -m sglang.test.run_eval --port 30000 --eval-name gsm8k --num-examples 1319 --max-tokens 16000 --repeat 1 --num-threads 1319 --num-shots 5

Thanks for the update, I'll try it tomorrow.

@hlu1 (Collaborator, Author) commented Feb 18, 2026

/rerun-failed-ci

@Edwardf0t1 (Collaborator) left a comment


LGTM, thanks @hlu1

@hlu1 (Collaborator, Author) commented Feb 19, 2026

/rerun-failed-ci

@ispobock merged commit bba2fc4 into sgl-project:main on Feb 19, 2026
303 of 340 checks passed
@aabbccddwasd commented

Why hardcode quant_config=None instead of deriving it from the model configuration? Bypassing quantization logic at the SGLang level is counterproductive since NVFP4 checkpoints do not contain native BF16 weights. Consequently, this PR breaks compatibility with the majority of NVFP4 models available on Hugging Face.

aabbccddwasd added a commit to aabbccddwasd/sglang that referenced this pull request Feb 20, 2026
@hlu1 (Collaborator, Author) commented Feb 20, 2026

Why hardcode quant_config=None instead of deriving it from the model configuration? Bypassing quantization logic at the SGLang level is counterproductive since NVFP4 checkpoints do not contain native BF16 weights. Consequently, this PR breaks compatibility with the majority of NVFP4 models available on Hugging Face.

The checkpoint is released: https://huggingface.co/nvidia/Qwen3.5-397B-A17B-NVFP4
I'll see how I can do it without forcing quant_config to be None.
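
One possible direction, sketched here as an assumption rather than the eventual fix: derive the unquantized-module list from the checkpoint's own quantization config. ModelOpt-style exports typically ship a JSON quant config with an exclude list; the file name and keys below are assumptions.

    import json
    import os

    def module_is_quantized(checkpoint_dir: str, prefix: str) -> bool:
        # Consult the checkpoint's quantization config instead of a
        # hardcoded module list (file name and keys are assumed).
        with open(os.path.join(checkpoint_dir, "hf_quant_config.json")) as f:
            cfg = json.load(f)
        excluded = cfg.get("quantization", {}).get("exclude_modules", [])
        # Entries may be glob-like (e.g. "model.visual*"): strip the star
        # and compare prefixes.
        return not any(prefix.startswith(m.rstrip("*")) for m in excluded)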

ec-jt added a commit to ec-jt/sglang that referenced this pull request Feb 24, 2026
magicYang1573 pushed a commit to magicYang1573/sglang that referenced this pull request Mar 9, 2026
Wangzheee pushed a commit to Wangzheee/sglang that referenced this pull request Mar 21, 2026
Copilot AI added a commit to liusy58/sglang that referenced this pull request Apr 15, 2026
Edwardf0t1 added a commit to NVIDIA/Model-Optimizer that referenced this pull request Apr 16, 2026
…els (#1236)

### What does this PR do?

Type of change: Skills update

Add a debug loop guide for deploying unsupported models to the
deployment skill. When deploying models not in the validated support
matrix (e.g., newly quantized VLMs or models with new architectures like
Devstral/ministral3), the inference framework (vLLM, SGLang, TRT-LLM)
often fails during model init or weight loading.

This PR adds:
- `references/unsupported-models.md` — a 5-step iterative debug
workflow: **run → read error → diagnose → patch framework source →
re-run**
- A short pointer in `SKILL.md` under "Unsupported Models" (keeps
SKILL.md concise, matching the PTQ skill's pattern)

The guide covers five common error categories with real-world examples:
- **Weight key mismatches** (e.g.,
[vllm#39406](vllm-project/vllm#39406))
- **Quantized/unquantized layer confusion** (e.g.,
[sglang#18937](sgl-project/sglang#18937))
- **Missing architecture support** (e.g., `ministral3` not handled in
vLLM's `mistral3.py`)
- **Transformers version mismatches**
- **Kernel-level issues** (escalate to framework team)

Motivated by deploying a Devstral-Small-2-24B NVFP4 checkpoint on vLLM,
where vLLM's `mistral3.py` didn't handle `ministral3` as a text backbone
model type.

### Testing

Validated end-to-end: NVFP4 quantization of Devstral-Small-2-24B → vLLM
deployment on B100 GPUs with the debug loop (3 iterations to get the
server running).

### Before your PR is "*Ready for review*"

- Is this change backward compatible?: N/A (documentation only)
- If you copied code from any other sources or added a new PIP
dependency, did you follow guidance in `CONTRIBUTING.md`: N/A
- Did you write any new necessary tests?: N/A (skill documentation)
- Did you update
[Changelog](https://github.com/NVIDIA/Model-Optimizer/blob/main/CHANGELOG.rst)?:
N/A

## Summary by CodeRabbit

* **Documentation**
* Added a deployment guide for unsupported models with an iterative "run
→ read error → diagnose → patch → re-run" troubleshooting workflow,
common failure categories, escalation criteria, and practical
remediation tips.
* Added post-quantization validation guidance and a lightweight script
to verify which layers are quantized vs excluded, plus recommendations
for addressing unexpected layers and MoE/VLM naming gaps.

---------

Signed-off-by: Zhiyu Cheng <zhiyuc@nvidia.com>
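
The "lightweight script" mentioned in the commit message above might look like the following sketch, which infers quantized modules from the presence of scale tensors in the checkpoint shards (directory layout and suffix names are assumptions):

    import glob
    from safetensors import safe_open

    quantized, plain = set(), set()
    for shard in glob.glob("checkpoint/*.safetensors"):
        with safe_open(shard, framework="pt") as f:
            for name in f.keys():
                base = name.rsplit(".", 1)[0]  # module path without suffix
                if name.endswith((".weight_scale", ".input_scale")):
                    quantized.add(base)
                elif name.endswith(".weight"):
                    plain.add(base)
    print(f"quantized modules: {len(quantized)}")
    print(f"unquantized modules: {len(plain - quantized)}")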
