
[Bug] DeepSeek FP4 launch failure #12059

@JustinTong0323

Description


Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

Broken by this PR: #11795

SGLANG_USE_CUTLASS_BACKEND_FOR_FP4_GEMM=1 TRTLLM_ENABLE_PDL=1 python3 -m sglang.launch_server --model-path nvidia/DeepSeek-V3-0324-FP4 --trust-remote-code --quantization modelopt_fp4 --tp 8  --speculative-algorithm=EAGLE  --port 40020 --kv-cache-dtype fp8_e4m3 --enable-beta-spec
[2025-10-24 06:22:56 TP2] ModelOptModelLoader: Loading base model...
[2025-10-24 06:22:56 TP4] Using ModelOptModelLoader due to ModelOpt quantization config.
[2025-10-24 06:22:56 TP4] ModelOptModelLoader: Loading base model...
[2025-10-24 06:22:56 TP3] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP5] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP1] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP2] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP0] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP4] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP7] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP6] Model is already quantized, loading directly...
[2025-10-24 06:22:56 TP3] Detected nvfp4 checkpoint. Please note that the format is experimental and subject to change.
[2025-10-24 06:22:56 TP3] Overriding DeepseekV3ForCausalLMNextN quant config for modelopt_fp4 Deepseek model.
[2025-10-24 06:22:56 TP3] Scheduler hit an exception: Traceback (most recent call last):
  File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 2747, in run_scheduler_process
    scheduler = Scheduler(
  File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 323, in __init__
    self.launch_draft_worker(
  File "/root/sglang/python/sglang/srt/managers/scheduler.py", line 566, in launch_draft_worker
    self.draft_worker = WorkerClass(
  File "/root/sglang/python/sglang/srt/speculative/eagle_worker_v2.py", line 522, in __init__
    self._draft_worker = EagleDraftWorker(
  File "/root/sglang/python/sglang/srt/speculative/eagle_worker_v2.py", line 104, in __init__
    self.draft_worker = TpModelWorker(
  File "/root/sglang/python/sglang/srt/managers/tp_worker.py", line 235, in __init__
    self._model_runner = ModelRunner(
  File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 317, in __init__
    self.initialize(min_per_gpu_memory)
  File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 389, in initialize
    self.load_model()
  File "/root/sglang/python/sglang/srt/model_executor/model_runner.py", line 884, in load_model
    self.model = get_model(
  File "/root/sglang/python/sglang/srt/model_loader/__init__.py", line 28, in get_model
    return loader.load_model(
  File "/root/sglang/python/sglang/srt/model_loader/loader.py", line 1960, in load_model
    return super().load_model(
  File "/root/sglang/python/sglang/srt/model_loader/loader.py", line 590, in load_model
    model = _initialize_model(
  File "/root/sglang/python/sglang/srt/model_loader/loader.py", line 262, in _initialize_model
    return model_class(**kwargs)
  File "/root/sglang/python/sglang/srt/models/deepseek_nextn.py", line 163, in __init__
    self.model = DeepseekModelNextN(
  File "/root/sglang/python/sglang/srt/models/deepseek_nextn.py", line 88, in __init__
    self.decoder = DeepseekV2DecoderLayer(
  File "/root/sglang/python/sglang/srt/models/deepseek_v2.py", line 2543, in __init__
    self.mlp = DeepseekV2MoE(
  File "/root/sglang/python/sglang/srt/models/deepseek_v2.py", line 580, in __init__
    self.experts = get_moe_impl_class(quant_config)(
  File "/root/sglang/python/sglang/srt/layers/moe/fused_moe_triton/layer.py", line 228, in __init__
    self.quant_method.create_moe_runner(self, self.moe_runner_config)
  File "/root/sglang/python/sglang/srt/layers/quantization/unquant.py", line 235, in create_moe_runner
    self.runner = MoeRunner(backend, moe_runner_config)
  File "/root/sglang/python/sglang/srt/layers/moe/moe_runner/runner.py", line 40, in __init__
    raise NotImplementedError(f"Unsupported runner backend: {runner_backend}")
NotImplementedError: Unsupported runner backend: MoeRunnerBackend.FLASHINFER_TRTLLM

[2025-10-24 06:22:56] Received sigquit from a child process. It usually means the child failed.
[1]    917465 killed     SGLANG_USE_CUTLASS_BACKEND_FOR_FP4_GEMM=1 TRTLLM_ENABLE_PDL=1 python3 -m
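The failure mode in the traceback is a backend-dispatch check: the NextN draft model's MoE layer ends up on the unquantized path, which constructs a MoeRunner with MoeRunnerBackend.FLASHINFER_TRTLLM, a backend the runner has no implementation for. The sketch below is a hypothetical stand-in for that dispatch pattern (class names and structure are assumptions based on the traceback, not the actual sglang code), showing why the constructor raises:

```python
from enum import Enum, auto


# Hypothetical stand-ins for sglang's MoeRunnerBackend and MoeRunner;
# the real enum members and supported set may differ.
class MoeRunnerBackend(Enum):
    TRITON = auto()
    FLASHINFER_TRTLLM = auto()


class MoeRunner:
    # Only backends with a registered runner implementation are accepted.
    _SUPPORTED = {MoeRunnerBackend.TRITON}

    def __init__(self, backend: MoeRunnerBackend):
        if backend not in self._SUPPORTED:
            # Mirrors the failure in the log: the unquantized MoE path
            # hands FLASHINFER_TRTLLM to MoeRunner, which rejects it.
            raise NotImplementedError(f"Unsupported runner backend: {backend}")
        self.backend = backend
```

Under this reading, the regression is that PR #11795 changed which quant method (and hence which backend) the draft model's MoE layer selects, so the fix would likely be either registering a runner for that backend or routing the NextN MoE back to a supported one.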

Reproduction

Launch the server with the command shown in the bug description above.

Environment

(sglang) ➜  ~ python3 -m sglang.check_env
/root/.python/sglang/lib/python3.10/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
Python: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA B200
GPU 0,1,2,3,4,5,6,7 Compute Capability: 10.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 580.65.06
PyTorch: 2.8.0+cu128
sglang: 0.5.4
sgl_kernel: 0.3.16.post3
flashinfer_python: 0.4.1
triton: 3.4.0
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.12.15
fastapi: 0.118.0
hf_transfer: 0.1.9
huggingface_hub: 0.35.3
interegular: 0.3.3
modelscope: 1.30.0
orjson: 3.11.3
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.0
pydantic: 2.11.9
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.37.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.25
openai: 1.99.1
tiktoken: 0.11.0
anthropic: 0.69.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    NV18    NV18    NV18    NV18    NV18    NV18    PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS     0-55,112-167    0               N/A
GPU1    NV18     X      NV18    NV18    NV18    NV18    NV18    NV18    PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS     0-55,112-167    0               N/A
GPU2    NV18    NV18     X      NV18    NV18    NV18    NV18    NV18    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     0-55,112-167    0               N/A
GPU3    NV18    NV18    NV18     X      NV18    NV18    NV18    NV18    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     0-55,112-167    0               N/A
GPU4    NV18    NV18    NV18    NV18     X      NV18    NV18    NV18    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    56-111,168-223  1               N/A
GPU5    NV18    NV18    NV18    NV18    NV18     X      NV18    NV18    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    56-111,168-223  1               N/A
GPU6    NV18    NV18    NV18    NV18    NV18    NV18     X      NV18    SYS     SYS     SYS     SYS     NODE    NODE    PIX     PIX     56-111,168-223  1               N/A
GPU7    NV18    NV18    NV18    NV18    NV18    NV18    NV18     X      SYS     SYS     SYS     SYS     NODE    NODE    PIX     PIX     56-111,168-223  1               N/A
NIC0    PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS      X      PIX     NODE    NODE    SYS     SYS     SYS     SYS
NIC1    PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS     PIX      X      NODE    NODE    SYS     SYS     SYS     SYS
NIC2    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     NODE    NODE     X      PIX     SYS     SYS     SYS     SYS
NIC3    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     NODE    NODE    PIX      X      SYS     SYS     SYS     SYS
NIC4    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS      X      PIX     NODE    NODE
NIC5    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    SYS     SYS     SYS     SYS     PIX      X      NODE    NODE
NIC6    SYS     SYS     SYS     SYS     NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     NODE    NODE     X      PIX
NIC7    SYS     SYS     SYS     SYS     NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     NODE    NODE    PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: rocep145s0
  NIC1: rocep146s0
  NIC2: rocep152s0
  NIC3: rocep153s0
  NIC4: rocep198s0
  NIC5: rocep199s0
  NIC6: rocep205s0
  NIC7: rocep206s0


Hypervisor vendor: KVM
ulimit soft: 1048576
