Checklist
Describe the bug
Enabling piecewise CUDA graph (PCG) degrades the accuracy of gpt-oss-120b. With PCG enabled, GSM8K accuracy varies from 0.745 to 0.80 across runs, much lower than the baseline (without PCG).
Reproduction
Baseline (without piecewise CUDA graph):
python3 -m sglang.launch_server --model-path /shared/public/elr-models/openai/gpt-oss-120b-new/ --trust-remote-code --tp 2 --reasoning-parser gpt-oss
python benchmark/gsm8k/bench_sglang.py --data-path /shared/public/data/gsm8k/test.jsonl
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:29<00:00, 6.89it/s]
Accuracy: 0.855
Invalid: 0.005
Latency: 30.892 s
Output throughput: 2283.403 token/s
With piecewise CUDA graph enabled:
python3 -m sglang.launch_server --model-path /shared/public/elr-models/openai/gpt-oss-120b-new/ --trust-remote-code --tp 2 --reasoning-parser gpt-oss --enable-piecewise-cuda-graph
python benchmark/gsm8k/bench_sglang.py --data-path /shared/public/data/gsm8k/test.jsonl
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [00:24<00:00, 8.26it/s]
Accuracy: 0.745
Invalid: 0.010
Latency: 24.217 s
Output throughput: 2821.277 token/s
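Since the PCG accuracy varies from run to run, it helps to automate the A/B comparison rather than eyeball the logs. Below is a minimal sketch that parses the `bench_sglang.py` output format shown above and flags a regression; the 0.02 tolerance is an arbitrary assumption, not a project standard, and `parse_metrics` is a hypothetical helper, not part of the benchmark script.

```python
import re

def parse_metrics(output: str) -> dict:
    """Parse 'Accuracy: 0.855'-style lines from bench_sglang.py output."""
    metrics = {}
    for key in ("Accuracy", "Invalid", "Latency", "Output throughput"):
        m = re.search(rf"{re.escape(key)}:\s*([\d.]+)", output)
        if m:
            metrics[key] = float(m.group(1))
    return metrics

# Values taken from the two runs logged above.
baseline = parse_metrics("Accuracy: 0.855\nInvalid: 0.005\nLatency: 30.892 s")
pcg = parse_metrics("Accuracy: 0.745\nInvalid: 0.010\nLatency: 24.217 s")

# Flag a regression if PCG accuracy drops more than 0.02 below baseline
# (threshold chosen arbitrarily for illustration).
regressed = baseline["Accuracy"] - pcg["Accuracy"] > 0.02
```

Running the benchmark several times per configuration and comparing the mean would also expose the run-to-run variance (0.745 to 0.80) mentioned in the description.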
Environment
Python: 3.10.14 (main, Jul 14 2024, 22:24:12) [GCC 11.2.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA H200
GPU 0,1,2,3 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.8, V12.8.93
CUDA Driver Version: 550.163.01
PyTorch: 2.9.1+cu128
sglang: 0.0.0.dev9175+g2c2c4e446
sgl_kernel: 0.3.21
flashinfer_python: 0.6.1
flashinfer_cubin: 0.6.1
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.13.3
fastapi: 0.128.0
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.34.0
orjson: 3.11.5
outlines: 0.1.11
packaging: 26.0
psutil: 7.2.1
pydantic: 2.12.5
python-multipart: 0.0.22
pyzmq: 27.1.0
uvicorn: 0.40.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.76.0
litellm: Module Not Found
decord2: 3.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 SYS SYS SYS SYS PIX NODE 64-127,192-255 1 N/A
GPU1 NV18 X NV18 NV18 SYS SYS SYS SYS NODE NODE 64-127,192-255 1 N/A
GPU2 NV18 NV18 X NV18 SYS SYS SYS SYS NODE PIX 64-127,192-255 1 N/A
GPU3 NV18 NV18 NV18 X SYS SYS SYS SYS NODE NODE 64-127,192-255 1 N/A
NIC0 SYS SYS SYS SYS X NODE NODE NODE SYS SYS
NIC1 SYS SYS SYS SYS NODE X PIX NODE SYS SYS
NIC2 SYS SYS SYS SYS NODE PIX X NODE SYS SYS
NIC3 SYS SYS SYS SYS NODE NODE NODE X SYS SYS
NIC4 PIX NODE NODE NODE SYS SYS SYS SYS X NODE
NIC5 NODE NODE PIX NODE SYS SYS SYS SYS NODE X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
ulimit soft: 10000000