Checklist
Describe the bug
When running inference with "cognitivecomputations/DeepSeek-R1-AWQ", a bug occurs in the "load_weights" function.
An older version (released about 1-2 months ago) works fine, but the latest version fails. Comparing the code between the two versions, I found that the "load_weights" function in "class DeepseekV2ForCausalLM(nn.Module):" has changed, which may be the cause of the bug.
I installed both versions via Docker.
Reproduction
Run inference with "cognitivecomputations/DeepSeek-R1-AWQ":
python3 -m sglang.launch_server --model cognitivecomputations/DeepSeek-R1-AWQ --tp 8 --trust-remote-code
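To reproduce the regression side by side, both versions can be pinned via Docker image tags. A minimal sketch, assuming the official lmsysorg/sglang images; the exact tag names below are assumptions matched to the sglang versions reported in the environment dumps and should be verified on Docker Hub:

```shell
# Working older version (sglang 0.4.3.post2) -- image tag is an assumption
docker run --gpus all --ipc=host --shm-size 32g \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 30000:30000 lmsysorg/sglang:v0.4.3.post2-cu124 \
    python3 -m sglang.launch_server \
        --model cognitivecomputations/DeepSeek-R1-AWQ \
        --tp 8 --trust-remote-code --host 0.0.0.0 --port 30000

# Failing latest version (sglang 0.4.6.post2) -- image tag is an assumption
docker run --gpus all --ipc=host --shm-size 32g \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 30000:30000 lmsysorg/sglang:v0.4.6.post2-cu124 \
    python3 -m sglang.launch_server \
        --model cognitivecomputations/DeepSeek-R1-AWQ \
        --tp 8 --trust-remote-code --host 0.0.0.0 --port 30000
```

With identical flags and host setup, the only variable is the sglang version inside the image, which isolates the load_weights change as the suspect.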
Environment
Docker installation.
Python: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.247.01
PyTorch: 2.6.0+cu124
sglang: 0.4.6.post2
sgl_kernel: 0.1.1
flashinfer_python: 0.2.5+cu124torch2.6
triton: 3.2.0
transformers: 4.51.1
torchao: 0.10.0
numpy: 2.2.5
aiohttp: 3.11.18
fastapi: 0.115.12
hf_transfer: 0.1.9
huggingface_hub: 0.30.2
interegular: 0.3.3
modelscope: 1.25.0
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.4
python-multipart: 0.0.20
pyzmq: 26.4.0
uvicorn: 0.34.2
uvloop: 0.21.0
vllm: 0.8.4
xgrammar: 0.1.18
openai: 1.76.2
tiktoken: 0.9.0
anthropic: 0.50.0
litellm: 1.67.5
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 64-127,192-255 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 64-127,192-255 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 64-127,192-255 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X 64-127,192-255 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Environment of the working old version:
Python: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.247.01
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.1.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.12
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.23.0
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.63.2
tiktoken: 0.9.0
anthropic: 0.45.2
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 0-63,128-191 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 64-127,192-255 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 64-127,192-255 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 64-127,192-255 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X 64-127,192-255 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 1048576