[Bug] MTP CUDA error: an illegal memory access was encountered #8336

@SweatLin

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When I run the DeepSeek-R1-0528 model on a single node with MTP enabled for long reasoning workloads, a "CUDA error: an illegal memory access was encountered" occurs:

2025-07-25T05:09:16.157949381+08:00 stdout F [2025-07-25 05:09:16 TP0] Prefill batch. #new-seq: 1, #new-token: 8192, #cached-token: 10, #token: 281681, token usage: 0.49, #running-req: 42, #queue-req: 0, timestamp: 2025-07-25T05:09:16.157526
2025-07-25T05:09:16.591518957+08:00 stdout F [2025-07-25 05:09:16 TP0] Prefill batch. #new-seq: 1, #new-token: 1854, #cached-token: 0, #token: 289873, token usage: 0.51, #running-req: 42, #queue-req: 0, timestamp: 2025-07-25T05:09:16.591055
2025-07-25T05:09:17.214176275+08:00 stdout F [2025-07-25 05:09:17] INFO:     100.125.183.122:38746 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2025-07-25T05:09:17.233476663+08:00 stdout F [2025-07-25 05:09:17 TP0] Prefill batch. #new-seq: 1, #new-token: 947, #cached-token: 10, #token: 288496, token usage: 0.50, #running-req: 42, #queue-req: 0, timestamp: 2025-07-25T05:09:17.233054
2025-07-25T05:09:17.601851083+08:00 stdout F [2025-07-25 05:09:17] INFO:     100.125.183.122:36024 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2025-07-25T05:09:17.602139044+08:00 stdout F [2025-07-25 05:09:17] INFO:     214.2.7.3:32738 - "GET /health_generate HTTP/1.1" 200 OK
2025-07-25T05:09:17.616162831+08:00 stdout F [2025-07-25 05:09:17 TP0] Prefill batch. #new-seq: 1, #new-token: 8192, #cached-token: 398, #token: 285431, token usage: 0.50, #running-req: 42, #queue-req: 0, timestamp: 2025-07-25T05:09:17.615855
2025-07-25T05:09:17.648144098+08:00 stdout F [2025-07-25 05:09:17] INFO:     100.125.183.122:55128 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2025-07-25T05:09:18.06309057+08:00 stdout F [2025-07-25 05:09:18 TP0] Prefill batch. #new-seq: 2, #new-token: 8192, #cached-token: 10, #token: 293623, token usage: 0.51, #running-req: 42, #queue-req: 0, timestamp: 2025-07-25T05:09:18.062542
2025-07-25T05:09:18.433515824+08:00 stdout F [2025-07-25 05:09:18 TP0] Prefill batch. #new-seq: 1, #new-token: 6787, #cached-token: 0, #token: 301815, token usage: 0.53, #running-req: 43, #queue-req: 0, timestamp: 2025-07-25T05:09:18.433076
2025-07-25T05:09:19.130796758+08:00 stdout F [2025-07-25 05:09:19 TP1] Scheduler hit an exception: Traceback (most recent call last):
2025-07-25T05:09:19.130807819+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2769, in run_scheduler_process
2025-07-25T05:09:19.13080961+08:00 stdout F     scheduler.event_loop_normal()
2025-07-25T05:09:19.130810923+08:00 stdout F   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-07-25T05:09:19.130812485+08:00 stdout F     return func(*args, **kwargs)
2025-07-25T05:09:19.130813908+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 772, in event_loop_normal
2025-07-25T05:09:19.130823773+08:00 stdout F     result = self.run_batch(batch)
2025-07-25T05:09:19.130824865+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1756, in run_batch
2025-07-25T05:09:19.130826899+08:00 stdout F     ) = self.draft_worker.forward_batch_speculative_generation(batch)
2025-07-25T05:09:19.13082839+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 325, in forward_batch_speculative_generation
2025-07-25T05:09:19.130829715+08:00 stdout F     self.verify(batch, spec_info)
2025-07-25T05:09:19.130830919+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 687, in verify
2025-07-25T05:09:19.130832016+08:00 stdout F     res: EagleVerifyOutput = spec_info.verify(
2025-07-25T05:09:19.130833181+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_utils.py", line 475, in verify
2025-07-25T05:09:19.130834354+08:00 stdout F     accept_index_cpu = accept_index.tolist()
2025-07-25T05:09:19.13083538+08:00 stdout F RuntimeError: CUDA error: an illegal memory access was encountered
2025-07-25T05:09:19.130836441+08:00 stdout F CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-25T05:09:19.130837738+08:00 stdout F For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-25T05:09:19.130839137+08:00 stdout F Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-25T07:53:08.21748708+08:00 stdout F [2025-07-25 07:53:08 TP2] Scheduler hit an exception: Traceback (most recent call last):
2025-07-25T07:53:08.217488644+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2769, in run_scheduler_process
2025-07-25T07:53:08.217491295+08:00 stdout F     scheduler.event_loop_normal()
2025-07-25T07:53:08.217492618+08:00 stdout F   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-07-25T07:53:08.217495189+08:00 stdout F     return func(*args, **kwargs)
2025-07-25T07:53:08.217496327+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 772, in event_loop_normal
2025-07-25T07:53:08.217497393+08:00 stdout F     result = self.run_batch(batch)
2025-07-25T07:53:08.217498394+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1756, in run_batch
2025-07-25T07:53:08.217499858+08:00 stdout F     ) = self.draft_worker.forward_batch_speculative_generation(batch)
2025-07-25T07:53:08.217501559+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 325, in forward_batch_speculative_generation
2025-07-25T07:53:08.21750242+08:00 stdout F     self.verify(batch, spec_info)
2025-07-25T07:53:08.217503355+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 687, in verify
2025-07-25T07:53:08.217504335+08:00 stdout F     res: EagleVerifyOutput = spec_info.verify(
2025-07-25T07:53:08.217505315+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_utils.py", line 475, in verify
2025-07-25T07:53:08.21750623+08:00 stdout F     accept_index_cpu = accept_index.tolist()
2025-07-25T07:53:08.217507096+08:00 stdout F RuntimeError: CUDA error: an illegal memory access was encountered
2025-07-25T07:53:08.21750808+08:00 stdout F CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-25T07:53:08.217509063+08:00 stdout F For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-25T07:53:08.217510194+08:00 stdout F Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
2025-07-25T07:53:08.21751109+08:00 stdout F 
2025-07-25T07:53:08.217511919+08:00 stdout F 
2025-07-25T07:53:08.217936061+08:00 stdout F [2025-07-25 07:53:08 TP0] Scheduler hit an exception: Traceback (most recent call last):
2025-07-25T07:53:08.217942571+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 2769, in run_scheduler_process
2025-07-25T07:53:08.217944507+08:00 stdout F     scheduler.event_loop_normal()
2025-07-25T07:53:08.217946103+08:00 stdout F   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
2025-07-25T07:53:08.217947743+08:00 stdout F     return func(*args, **kwargs)
2025-07-25T07:53:08.217949185+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 772, in event_loop_normal
2025-07-25T07:53:08.217950757+08:00 stdout F     result = self.run_batch(batch)
2025-07-25T07:53:08.217952394+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1756, in run_batch
2025-07-25T07:53:08.217954433+08:00 stdout F     ) = self.draft_worker.forward_batch_speculative_generation(batch)
2025-07-25T07:53:08.217956404+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 325, in forward_batch_speculative_generation
2025-07-25T07:53:08.217960743+08:00 stdout F     self.verify(batch, spec_info)
2025-07-25T07:53:08.217963483+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_worker.py", line 687, in verify
2025-07-25T07:53:08.217964967+08:00 stdout F     res: EagleVerifyOutput = spec_info.verify(
2025-07-25T07:53:08.217966601+08:00 stdout F   File "/sgl-workspace/sglang/python/sglang/srt/speculative/eagle_utils.py", line 475, in verify
2025-07-25T07:53:08.217968188+08:00 stdout F     accept_index_cpu = accept_index.tolist()
2025-07-25T07:53:08.217969862+08:00 stdout F RuntimeError: CUDA error: an illegal memory access was encountered
2025-07-25T07:53:08.217971299+08:00 stdout F CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
2025-07-25T07:53:08.217973047+08:00 stdout F For debugging consider passing CUDA_LAUNCH_BLOCKING=1
2025-07-25T07:53:08.217983407+08:00 stdout F Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Reproduction

The startup command I used:

python3 -m sglang.launch_server \
  --model-path ${MODEL_PATH} \
  --served-model-name DeepSeek-R1 \
  --host 0.0.0.0 \
  --port 8000 \
  --trust-remote-code \
  --context-length 131072 \
  --tp 8 \
  --attention-backend fa3 \
  --mem-fraction-static 0.85 \
  --enable-metrics \
  --reasoning-parser deepseek-r1 \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 4 \
  --speculative-eagle-topk 2 \
  --speculative-num-draft-tokens 4 \
  --cuda-graph-max-bs 64 \
  --tool-call-parser deepseekv3 \
  --chat-template /sgl-workspace/sglang/examples/chat_template/tool_chat_template_deepseekv3.jinja
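One caveat when reading the traceback above: because CUDA kernel errors are reported asynchronously, the line it blames (accept_index.tolist() in eagle_utils.py) may not be the real culprit; .tolist() is simply the first call that forces a device-to-host synchronization, so any earlier faulting kernel surfaces there. As the error message itself suggests, rerunning with CUDA_LAUNCH_BLOCKING=1 should make the stack trace point at the kernel that actually faults. A debug rerun would look like this (same flags as the command above, only the env-var prefix added):

```shell
# CUDA_LAUNCH_BLOCKING=1 forces synchronous kernel launches, so the Python
# stack trace identifies the launch that triggers the illegal memory access.
# Expect a significant throughput hit; use for debugging only.
CUDA_LAUNCH_BLOCKING=1 python3 -m sglang.launch_server \
  --model-path ${MODEL_PATH} \
  --served-model-name DeepSeek-R1 \
  --host 0.0.0.0 \
  --port 8000 \
  --trust-remote-code \
  --context-length 131072 \
  --tp 8 \
  --attention-backend fa3 \
  --mem-fraction-static 0.85 \
  --enable-metrics \
  --reasoning-parser deepseek-r1 \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 4 \
  --speculative-eagle-topk 2 \
  --speculative-num-draft-tokens 4 \
  --cuda-graph-max-bs 64 \
  --tool-call-parser deepseekv3 \
  --chat-template /sgl-workspace/sglang/examples/chat_template/tool_chat_template_deepseekv3.jinja
```

If the trace from that run still lands in the EAGLE verify path, it would narrow the bug to the speculative-decoding kernels rather than the base model forward pass.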

Environment

root@sglang-one-node-0:/sgl-workspace/sglang# python3 -m sglang.check_env
Python: 3.10.12 (main, May 27 2025, 17:12:29) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H200
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.68
CUDA Driver Version: 550.54.15
PyTorch: 2.7.1+cu126
sglang: 0.4.9.post2
sgl_kernel: 0.2.5
flashinfer_python: 0.2.7.post1
triton: 3.3.1
transformers: 4.53.0
torchao: 0.9.0+cu126
numpy: 2.2.6
aiohttp: 3.12.14
fastapi: 0.116.1
hf_transfer: 0.1.9
huggingface_hub: 0.33.4
interegular: 0.3.3
modelscope: 1.28.0
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.21
openai: 1.95.1
tiktoken: 0.9.0
anthropic: 0.57.1
litellm: 1.74.2
decord: 0.6.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    NV18    NV18    NV18    NV18    NV18    NV18    PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS     0-47,96-143     0               N/A
GPU1    NV18     X      NV18    NV18    NV18    NV18    NV18    NV18    SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     0-47,96-143     0               N/A
GPU2    NV18    NV18     X      NV18    NV18    NV18    NV18    NV18    SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     0-47,96-143     0               N/A
GPU3    NV18    NV18    NV18     X      NV18    NV18    NV18    NV18    SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     0-47,96-143     0               N/A
GPU4    NV18    NV18    NV18    NV18     X      NV18    NV18    NV18    SYS     SYS     SYS     SYS     PIX     SYS     SYS     SYS     48-95,144-191   1               N/A
GPU5    NV18    NV18    NV18    NV18    NV18     X      NV18    NV18    SYS     SYS     SYS     SYS     SYS     PIX     SYS     SYS     48-95,144-191   1               N/A
GPU6    NV18    NV18    NV18    NV18    NV18    NV18     X      NV18    SYS     SYS     SYS     SYS     SYS     SYS     PIX     SYS     48-95,144-191   1               N/A
GPU7    NV18    NV18    NV18    NV18    NV18    NV18    NV18     X      SYS     SYS     SYS     SYS     SYS     SYS     SYS     PIX     48-95,144-191   1               N/A
NIC0    PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     SYS     SYS
NIC1    SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     SYS     SYS
NIC2    SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS     SYS
NIC3    SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS
NIC4    SYS     SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS
NIC5    SYS     SYS     SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS
NIC6    SYS     SYS     SYS     SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS
NIC7    SYS     SYS     SYS     SYS     SYS     SYS     SYS     PIX     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7


ulimit soft: 1048576
