[NVIDIA] Bugfix NVFP4 DGX Spark and RTX50 (#38423)
Conversation
Code Review
This pull request updates the CUTLASS revision to v4.4.2 and upgrades FlashInfer to version 0.6.7 across the Dockerfiles and requirement files. It also introduces runtime checks to verify that NVFP4 quantization kernels are compiled for the current GPU's SM version (SM100 or SM120) before use, preventing invalid backend selection or runtime failures. I have no feedback to provide.
Signed-off-by: johnnynunez <johnnynuca14@gmail.com>
Could a maintainer please add the
Hi @johnnynunez, the pre-commit checks have failed. Please run:

```bash
uv pip install "pre-commit>=4.5.1"
pre-commit install
pre-commit run --all-files
```

Then, commit the changes and push to your branch.
CUTLASS v4.4.2 added `ArchTag` to `DispatchPolicy` in `sm90_gemm_tma_warpspecialized_cooperative.hpp` to distinguish SM90 from SM120 kernel paths. Machete's custom `MacheteCollectiveMma` defines its own `DispatchPolicy` but was missing this field, causing all 18 Machete template instantiations to fail with "has no member ArchTag". Also reformats `nvfp4_scaled_mm_entry.cu` to satisfy the pre-commit linter.

Signed-off-by: johnnynunez <johnnynuca14@gmail.com>
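For context, a minimal sketch of the shape of that fix. The policy name and the other members here are simplified placeholders, not Machete's actual definition; only `ArchTag` and `cutlass::arch::Sm90` are the real pieces:

```cpp
#include <cutlass/arch/arch.h>  // cutlass::arch::Sm90

// A custom mainloop dispatch policy, analogous to the one Machete defines.
// CUTLASS v4.4.2 kernel code now reads Policy::ArchTag to tell SM90 and
// SM120 paths apart, so any custom policy must expose the member too.
struct MacheteCustomDispatchPolicy {
  constexpr static int Stages = 4;      // placeholder pipeline depth
  using ArchTag = cutlass::arch::Sm90;  // previously missing member; without
                                        // it, every instantiation fails with
                                        // "has no member ArchTag"
};
```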
Getting consistent Illegal Instruction crashes with this PR. Building FlashInfer from main with `FLASHINFER_CUDA_ARCH_LIST=12.1a`.
ready to merge! @mgoin Now it is working perfectly and B200 accuracy tests passed for NVFP4.

**Nemotron Super NVFP4 - DGX Spark**

```bash
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
vllm serve nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 \
  --kv-cache-dtype fp8 \
  --trust-remote-code \
  --gpu-memory-utilization 0.7 \
  --max-model-len 262144 \
  --max-num-seqs 10 \
  --enable-prefix-caching \
  --host 0.0.0.0 \
  --port 8000 \
  --enable-auto-tool-choice \
  --load-format fastsafetensors \
  --tool-call-parser qwen3_coder \
  --reasoning-parser nemotron_v3 \
  --mamba_ssm_cache_dtype float32
```

**Results (Benchmark & Stress Test)**

```
Auto-detected HF model: nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 (served as: nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4)
llama-benchy (0.3.5)
Date: 2026-03-30 01:35:34
Benchmarking model: nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 at http://localhost:8000/v1
Concurrency levels: [1]
Loading text from cache: /home/johnny/.cache/llama-benchy/cc6a0b5782734ee3b9069aa3b64cc62c.txt
Total tokens available in text corpus: 143827
Warming up...
Warmup (User only) complete. Delta: 16 tokens (Server: 38, Local: 22)
Warmup (System+Empty) complete. Delta: 16 tokens (Server: 38, Local: 22)
Running coherence test...
Coherence test PASSED.
Measuring latency using mode: api...
Average latency (api): 1.63 ms
Running test: pp=2048, tg=32, depth=0, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=4096, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=8192, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=16384, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=32768, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=65535, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=100000, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Running test: pp=2048, tg=32, depth=200000, concurrency=1
Run 1/3 (batch size 1)...
Run 2/3 (batch size 1)...
Run 3/3 (batch size 1)...
Printing results in MD format:
```

| model | test | t/s | peak t/s | ttfr (ms) | est_ppt (ms) | e2e_ttft (ms) |
|:-----------------------------------------------|-----------------:|-----------------:|-------------:|-------------------:|-------------------:|-------------------:|
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 | 1722.48 ± 394.11 | | 1269.76 ± 345.98 | 1268.14 ± 345.98 | 1269.84 ± 345.98 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 | 12.76 ± 0.01 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d4096 | 1948.06 ± 80.28 | | 3161.05 ± 134.07 | 3159.43 ± 134.07 | 3161.13 ± 134.05 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d4096 | 12.75 ± 0.01 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d8192 | 1964.84 ± 4.14 | | 5213.28 ± 10.99 | 5211.65 ± 10.99 | 5213.35 ± 10.97 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d8192 | 12.71 ± 0.01 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d16384 | 1934.31 ± 5.53 | | 9530.67 ± 27.20 | 9529.04 ± 27.20 | 9530.74 ± 27.22 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d16384 | 12.64 ± 0.01 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d32768 | 1857.07 ± 14.17 | | 18750.32 ± 143.56 | 18748.69 ± 143.56 | 18750.39 ± 143.57 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d32768 | 12.64 ± 0.02 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d65535 | 1759.29 ± 5.89 | | 38416.91 ± 128.78 | 38415.28 ± 128.78 | 38416.98 ± 128.78 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d65535 | 12.64 ± 0.04 | 13.00 ± 0.00 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d100000 | 1656.44 ± 4.33 | | 61608.98 ± 160.90 | 61607.35 ± 160.90 | 61609.06 ± 160.91 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d100000 | 12.69 ± 0.08 | 13.67 ± 0.47 | | | |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | pp2048 @ d200000 | 1397.08 ± 7.47 | | 144626.89 ± 771.10 | 144625.26 ± 771.10 | 144626.94 ± 771.11 |
| nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 | tg32 @ d200000 | 12.59 ± 0.12 | 14.00 ± 0.00 | | | |
```
llama-benchy (0.3.5)
date: 2026-03-30 01:35:34 | latency mode: api
(APIServer pid=33932) INFO 03-30 01:50:49 [loggers.py:259] Engine 000: Avg prompt throughput: 20205.7 tokens/s, Avg generation throughput: 3.2 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
(APIServer pid=33932) INFO 03-30 01:50:59 [loggers.py:259] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.0%, Prefix cache hit rate: 0.0%
```
```python
# Currently FI requires bfloat16 routing bias.
# https://github.com/flashinfer-ai/flashinfer/issues/2909
if e_score_correction_bias is not None:
    e_score_correction_bias = e_score_correction_bias.to(torch.bfloat16)
```
@pavanimajety do you know if this is right? I thought we fixed this issue for trtllm MoE across the board
With the new FI version, there are various CI failures with accuracy collapse. I tracked the cause down to these lines.
To reproduce the issue, run gsm8k on nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-FP8 and nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4 on current main with FlashInfer 0.6.7.
Signed-off-by: Johnny <johnnynuca14@gmail.com>
yewentao256 left a comment:
There is a FlashInfer version update PR here (#38188); not sure if we want to land it separately.
I see some eval tests still failing. Do we have any clues on those?

I retried both of the last two

@mgoin Is this expected?

@mgoin Thanks!
Signed-off-by: johnnynunez <johnnynuca14@gmail.com> Signed-off-by: Johnny <johnnynuca14@gmail.com> Signed-off-by: neweyes <328719365@qq.com>
…es on SM12x (#2913)

### Summary

- Add missing `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` compile flag to all CUTLASS fused MoE JIT modules (SM100/SM103/SM120) and `-DCUTLASS_ENABLE_GDC_FOR_SM90=1` to SM90 modules
- Sync nv_internal `grid_dependency_control.h` with upstream CUTLASS to support SM100/SM103/SM110/SM120/SM121 GDC
- Add `-DCUTLASS_ENABLE_GDC_FOR_SM90=1` to FP8 blockscale GEMM SM90 module

### Problem

Random `cudaErrorIllegalInstruction` crashes on DGX Spark (SM121) and RTX 50-series (SM120) when running NVFP4 MoE models (e.g., Nemotron, Qwen3.5-122B) under load. The crashes are intermittent and worsen with longer context lengths and higher concurrency.

**Root cause:** PR #2780 fixed the missing GDC compile flags for GEMM modules (`flashinfer/jit/gemm/core.py`), but the **CUTLASS fused MoE modules** in `flashinfer/jit/fused_moe.py` and the **FP8 blockscale GEMM module** were not fixed. This is the exact same class of bug as #2708.

Without `-DCUTLASS_ENABLE_GDC_FOR_SM100=1`, CUTLASS's `grid_dependency_control.h` compiles `wait_on_dependent_grids()` and `launch_dependent_grids()` as **empty no-ops**:

```cpp
CUTLASS_DEVICE
void wait_on_dependent_grids() {
#if (defined(CUTLASS_GDC_ENABLED))  // ← not defined without the flag
  asm volatile("griddepcontrol.wait;");
#endif
}
```

Meanwhile, the host-side code still sets `programmaticStreamSerializationAllowed = true` (PDL enabled) via `device_support_pdl()`, which returns `True` for all `major >= 9`, including SM12x. This means:

1. **Host enables PDL** → CUDA runtime overlaps consecutive kernels
2. **Device GDC barriers are no-ops** → No synchronization between overlapping kernels
3. **Race condition** → Dependent kernel reads stale global memory → corruption → `cudaErrorIllegalInstruction`

The crash is random because it depends on exact kernel scheduling timing, which varies per request.

### Fix

**`flashinfer/jit/fused_moe.py`** — Added GDC flags to all CUTLASS fused MoE modules:

| Module | Flag | Architectures Covered |
|---|---|---|
| `gen_cutlass_fused_moe_sm120_module()` | `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` | SM120, SM121 |
| `gen_cutlass_fused_moe_sm103_module()` | `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` | SM103, SM120, SM121 |
| `gen_cutlass_fused_moe_sm100_module()` | `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` | SM100, SM110, SM120, SM121 |
| `gen_cutlass_fused_moe_sm90_module()` | `-DCUTLASS_ENABLE_GDC_FOR_SM90=1` | SM90 |
| `gen_trtllm_gen_fused_moe_sm100_module()` | `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` | SM100+, SM120, SM121 |

**`flashinfer/jit/gemm/fp8_blockscale.py`** — Added `-DCUTLASS_ENABLE_GDC_FOR_SM90=1` to `gen_fp8_blockscale_gemm_sm90_module()`.

**`csrc/nv_internal/.../grid_dependency_control.h`** — Synced with upstream CUTLASS (`3rdparty/cutlass/include/cutlass/arch/grid_dependency_control.h`) to add SM100+ GDC support. Previously only handled SM90, so any nv_internal TensorRT-LLM code compiled for SM12x would have GDC barriers silently compiled as no-ops.

### Why `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` covers SM12x

CUTLASS uses a single flag for the entire Blackwell family. From `grid_dependency_control.h`:

```cpp
#if (CUDA_BARRIER_ENABLED && defined(CUTLASS_ENABLE_GDC_FOR_SM100) && defined(__CUDA_ARCH__) && \
     ((__CUDA_ARCH__ == 1000 && ...) ||  // SM100
      (__CUDA_ARCH__ == 1030 && ...) ||  // SM103
      (__CUDA_ARCH__ == 1100 && ...) ||  // SM110
      (__CUDA_ARCH__ == 1200 && ...) ||  // SM120 (RTX 50-series)
      (__CUDA_ARCH__ == 1210 && ...)))   // SM121 (DGX Spark)
#define CUTLASS_GDC_ENABLED
```

### Why SM90 GDC flag was NOT added to SM100+ modules

PR #2716 attempted to add both `-DCUTLASS_ENABLE_GDC_FOR_SM90=1` and `-DCUTLASS_ENABLE_GDC_FOR_SM100=1` to all modules. It broke AOT builds because `sm120_gemm_tma_warpspecialized_cooperative_asymmetric_dma.hpp` checks `CUTLASS_ENABLE_GDC_FOR_SM90` and calls `scheduler.is_last_tile()` — a method not present on the SM120 scheduler. PR #2780 corrected this by using only the SM100 flag for SM100+ modules. This PR follows the same approach.

### Related

- #2708 — Original issue: missing GDC flags cause PDL race condition
- #2716 — First fix attempt (reverted — broke AOT)
- #2780 — Corrected fix for GEMM modules only
- vllm-project/vllm#38423 — NVFP4 bugfix on DGX Spark
- NVIDIA/cutlass#3121 — K=64 block-scaled GEMM tiles (separate issue)

### Test plan

- [x] Clear JIT cache: `rm -rf ~/.cache/flashinfer/`
- [x] Run NVFP4 MoE model on SM121 (DGX Spark) with 128K context under load — verify no `cudaErrorIllegalInstruction`
- [x] Run NVFP4 MoE model on SM120 (RTX 50-series) with concurrent requests — verify no NaN/garbage output
- [x] Verify `CUDA_LAUNCH_BLOCKING=1` workaround is no longer needed
- [x] AOT build with `FLASHINFER_CUDA_ARCH_LIST="12.1a"` completes without errors
- [x] SM90 (Hopper) fused MoE tests pass: `pytest tests/moe/`
- [x] SM100 GEMM tests still pass (no regression from existing GDC flags)
Signed-off-by: johnnynunez <johnnynuca14@gmail.com> Signed-off-by: Johnny <johnnynuca14@gmail.com> Signed-off-by: Rishi Puri <riship@nvidia.com>
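The PDL failure mode described in the commit message above can be made concrete with a small standalone CUDA sketch. The `producer`/`consumer` kernels here are hypothetical (not FlashInfer code); the launch attribute and `cudaGridDependencySynchronize()` are the standard CUDA runtime APIs for programmatic dependent launch, the same contract CUTLASS's `wait_on_dependent_grids()` wraps:

```cpp
// Build with -arch=sm_90 or newer: grid dependency control requires SM90+.
#include <cuda_runtime.h>

__global__ void producer(float* out) { out[threadIdx.x] = 1.0f; }

__global__ void consumer(const float* in, float* out) {
  // Device-side barrier: wait until the preceding grid has finished writing.
  // CUTLASS's wait_on_dependent_grids() emits the equivalent
  // "griddepcontrol.wait" PTX only when CUTLASS_GDC_ENABLED is defined;
  // otherwise it is an empty no-op and this ordering guarantee disappears.
  cudaGridDependencySynchronize();
  out[threadIdx.x] = in[threadIdx.x] + 1.0f;
}

void launch_pair_with_pdl(float* a, float* b, cudaStream_t stream) {
  // Host side of PDL: allow the runtime to overlap consecutive kernels on
  // this stream (what device_support_pdl() opts into for major >= 9).
  cudaLaunchAttribute attr{};
  attr.id = cudaLaunchAttributeProgrammaticStreamSerialization;
  attr.val.programmaticStreamSerializationAllowed = 1;

  cudaLaunchConfig_t cfg{};
  cfg.gridDim = dim3(1);
  cfg.blockDim = dim3(32);
  cfg.stream = stream;
  cfg.attrs = &attr;
  cfg.numAttrs = 1;

  cudaLaunchKernelEx(&cfg, producer, a);  // may overlap with consumer below
  cudaLaunchKernelEx(&cfg, consumer, static_cast<const float*>(a), b);
}
```

With the flag missing, the host half of this contract stays enabled while the device half vanishes, which is exactly the race the module-flag table above closes.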
Signed-off-by: johnnynunez <johnnynuca14@gmail.com> Signed-off-by: Johnny <johnnynuca14@gmail.com>
## Summary

Fix `cudaErrorIllegalInstruction` when running NVFP4 models (e.g. `nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B-NVFP4`) on SM12x GPUs (RTX 50 series SM120, DGX Spark SM121).

## Root causes

1. **CUTLASS v4.2.2 lacks SM12x NVFP4 tile constraints** — The bundled CUTLASS was missing SM120f family-level compilation support for NVFP4/MX Grouped GEMM and SM121-specific tile configurations (DGX Spark). This caused `IllegalInstruction` during decode when small-M tile variants were selected. Related upstream: NVIDIA/cutlass#3038.
2. **FlashInfer 0.6.6 bundles CUTLASS 4.2.1** — The FlashInfer CUTLASS MoE backend failed on SM12x with `Failed to initialize cutlass TMA WS grouped gemm` due to the same missing tile constraints. Fixed upstream in flashinfer-ai/flashinfer#2798.
3. **`cutlass_scaled_mm_supports_fp4()` reported false availability** — It only checked the CUDA runtime version (`>= 12080`), not whether the SM-specific kernel was actually compiled. On a build with only `ENABLE_NVFP4_SM100`, it incorrectly reported CUTLASS as available for SM12x, then failed at dispatch.
4. **Quantization kernels had no SM runtime guard** — The `scaled_fp4_quant`, `silu_and_mul_nvfp4_quant`, and expert quant entry points dispatched to `_sm1xxa` kernels if any SM1xx was compiled, with no runtime check. If only SM100 SASS existed, CUDA would JIT-compile SM100 PTX for SM120 (a different major arch), producing illegal instructions asynchronously — surfacing later at `synchronize()` as an opaque CUDA error.
5. **FlashInfer CUTLASS backend bypassed quant kernel checks** — `select_nvfp4_linear_backend()` selected FlashInfer CUTLASS solely on `has_device_capability(100)`, without verifying that the vLLM quantization kernels (used by all non-Marlin backends) were compiled for the current SM.
## Changes

- `CMakeLists.txt`: CUTLASS revision → v4.4.2
- `docker/Dockerfile`, `docker/Dockerfile.nightly_torch`, `docker/versions.json`: `FLASHINFER_VERSION` 0.6.6 → 0.6.7
- `nvfp4_scaled_mm_entry.cu`: `cutlass_scaled_mm_supports_fp4()` now checks the compile-time `ENABLE_NVFP4_SM100` / `ENABLE_NVFP4_SM120` guards per SM range instead of a blanket `>= 100` check (see the sketch after this list)
- `nvfp4_quant_entry.cu`: adds an `nvfp4_quant_sm_supported()` runtime guard to all four quant entry points (`scaled_fp4_quant`, `scaled_fp4_experts_quant`, `silu_and_mul_nvfp4_quant`, `silu_and_mul_scaled_fp4_experts_quant`)
- `nvfp4_utils.py`: `select_nvfp4_linear_backend()` gates FlashInfer CUTLASS on `cutlass_fp4_supported()` and adds a validation assert for all FlashInfer backends
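The availability gate referenced from the `nvfp4_scaled_mm_entry.cu` item above, as a simplified sketch (not the exact vLLM source; the SM ranges are illustrative, while `ENABLE_NVFP4_SM100` / `ENABLE_NVFP4_SM120` are the real compile-time guards):

```cpp
#include <cstdint>

// Report FP4 CUTLASS support only if a kernel for the *current* SM range
// was actually compiled, instead of the old blanket "capability >= 100"
// check that only verified the CUDA runtime version.
bool cutlass_scaled_mm_supports_fp4(int64_t cuda_device_capability) {
#ifdef ENABLE_NVFP4_SM100
  // SM100-family SASS was built: valid for the 10x family only.
  if (cuda_device_capability >= 100 && cuda_device_capability < 120) {
    return true;
  }
#endif
#ifdef ENABLE_NVFP4_SM120
  // SM120/SM121 SASS was built: safe on RTX 50 and DGX Spark.
  if (cuda_device_capability >= 120) {
    return true;
  }
#endif
  // No matching kernel compiled: report unsupported rather than let CUDA
  // JIT SM100 PTX onto a different major arch (the IllegalInstruction path).
  return false;
}
```

The `nvfp4_quant_sm_supported()` runtime guard added to the quant entry points follows the same shape, checked against the device's actual SM at dispatch time.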
## What is NOT changed

Marlin remains a valid fallback on SM12x. Marlin FP4 uses weight-only dequantization to BF16 — it does not use native FP4 tensor core instructions and works correctly on all Blackwell architectures including DGX Spark. Benchmarks confirm Marlin is stable on SM121 (~558 tok/s, on par with vLLM CUTLASS at ~562 tok/s). The Marlin path (`apply_fp4_marlin_linear`) bypasses the vLLM quant kernels entirely, so the SM guards in `nvfp4_quant_entry.cu` do not affect it.
## Behavior on SM12x after this PR

- Built with `ENABLE_NVFP4_SM120` and CUTLASS v4.4.2: NVFP4 runs on the native CUTLASS path with no `IllegalInstruction`.
- Built without `ENABLE_NVFP4_SM120`: the new guards report the kernels as unavailable, instead of the previous `IllegalInstruction` (SM100 PTX JIT to SM120).
- FlashInfer CUTLASS MoE: the `Failed to initialize cutlass TMA WS grouped gemm` error (CUTLASS 4.2.1 in FlashInfer 0.6.6) is resolved by the 0.6.7 bump.
## Follow-up: FlashInfer 0.6.8

flashinfer-ai/flashinfer#2738 (merged March 28, 2026) adds native NVFP4 and MXFP4 group GEMM support for SM120/SM121 (RTX 50 / DGX Spark) directly in FlashInfer. This will land in FlashInfer 0.6.8. Once released, `FLASHINFER_VERSION` should be bumped in `docker/Dockerfile`, `docker/Dockerfile.nightly_torch`, and `docker/versions.json` to unlock FlashInfer's own SM12x NVFP4/MXFP4 kernels (including GDC unguarding and PDL group GEMM fixes). TODO comments have been added to both Dockerfiles tracking this.
## Test plan

- Build with `CUDA_ARCHS="12.0a;12.1a"` on DGX Spark (SM121); verify an NVFP4 model serves with the vLLM CUTLASS backend (`VLLM_NVFP4_GEMM_BACKEND=cutlass --moe-backend=cutlass`)
- Build with `CUDA_ARCHS="12.0a;12.1a"`; verify the Marlin fallback still works (`VLLM_NVFP4_GEMM_BACKEND=marlin --moe-backend=marlin`)
- Build with `CUDA_ARCHS="10.0a"` only; verify the Marlin fallback on SM12x (no `IllegalInstruction`)
- Run `tests/models/quantization/test_nvfp4.py` on SM120
- Build `Dockerfile` and `Dockerfile.nightly_torch`