
[Bug] The memory capacity is unbalanced. Some GPUs may be occupied by other processes. #4233

Description

@inkhare

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English; otherwise, the issue will be closed.

Describe the bug

[2025-03-09 14:40:46 TP4] Capture cuda graph end. Time elapsed: 1334.38 s
[2025-03-09 14:40:46 TP3] Capture cuda graph end. Time elapsed: 1334.35 s
[2025-03-09 14:40:46 TP7] Capture cuda graph end. Time elapsed: 1334.37 s
[2025-03-09 14:40:46 TP5] Capture cuda graph end. Time elapsed: 1334.32 s
[2025-03-09 14:40:46 TP2] Capture cuda graph end. Time elapsed: 1334.32 s
[2025-03-09 14:40:46 TP6] Capture cuda graph end. Time elapsed: 1334.36 s
[2025-03-09 14:40:46 TP0] Capture cuda graph end. Time elapsed: 1334.36 s
[2025-03-09 14:40:47 TP4] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP4] Init torch distributed begin.
[2025-03-09 14:40:47 TP6] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP6] Init torch distributed begin.
[2025-03-09 14:40:47 TP2] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP2] Init torch distributed begin.
[2025-03-09 14:40:47 TP7] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP7] Init torch distributed begin.
[2025-03-09 14:40:47 TP1] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP5] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP1] Init torch distributed begin.
[2025-03-09 14:40:47 TP5] Init torch distributed begin.
[2025-03-09 14:40:47 TP0] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP0] Init torch distributed begin.
[2025-03-09 14:40:47 TP3] MLA optimization is turned on. Use triton backend.
[2025-03-09 14:40:47 TP3] Init torch distributed begin.
[2025-03-09 14:40:47 TP0] Load weight begin. avail mem=12.42 GB
[2025-03-09 14:40:47 TP1] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 1816, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 252, in __init__
    self.draft_worker = EAGLEWorker(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/speculative/eagle_worker.py", line 47, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/tp_worker.py", line 68, in __init__
    self.model_runner = ModelRunner(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 187, in __init__
    min_per_gpu_memory = self.init_torch_distributed()
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 280, in init_torch_distributed
    raise ValueError(
ValueError: The memory capacity is unbalanced. Some GPUs may be occupied by other processes.

[2025-03-09 14:40:47 TP4] Load weight begin. avail mem=12.42 GB
[2025-03-09 14:40:47 TP2] Load weight begin. avail mem=12.42 GB
[2025-03-09 14:40:47 TP5] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 1816, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 252, in __init__
    self.draft_worker = EAGLEWorker(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/speculative/eagle_worker.py", line 47, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/tp_worker.py", line 68, in __init__
    self.model_runner = ModelRunner(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 187, in __init__
    min_per_gpu_memory = self.init_torch_distributed()
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 280, in init_torch_distributed
    raise ValueError(
ValueError: The memory capacity is unbalanced. Some GPUs may be occupied by other processes.

[2025-03-09 14:40:47 TP7] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 1816, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 252, in __init__
    self.draft_worker = EAGLEWorker(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/speculative/eagle_worker.py", line 47, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/tp_worker.py", line 68, in __init__
    self.model_runner = ModelRunner(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 187, in __init__
    min_per_gpu_memory = self.init_torch_distributed()
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 280, in init_torch_distributed
    raise ValueError(
ValueError: The memory capacity is unbalanced. Some GPUs may be occupied by other processes.

[2025-03-09 14:40:47 TP4] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
[2025-03-09 14:40:47 TP3] Scheduler hit an exception: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 1816, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/scheduler.py", line 252, in __init__
    self.draft_worker = EAGLEWorker(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/speculative/eagle_worker.py", line 47, in __init__
    super().__init__(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/managers/tp_worker.py", line 68, in __init__
    self.model_runner = ModelRunner(
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 187, in __init__
    min_per_gpu_memory = self.init_torch_distributed()
  File "/usr/local/lib/python3.10/dist-packages/sglang/srt/model_executor/model_runner.py", line 280, in init_torch_distributed
    raise ValueError(
ValueError: The memory capacity is unbalanced. Some GPUs may be occupied by other processes.
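
For context on where this error comes from: per the traceback, init_torch_distributed in model_executor/model_runner.py measures each TP rank's free GPU memory, reduces the minimum across all ranks, and aborts when the minimum is noticeably below the local value, since that usually means another process is holding memory on one of the GPUs. Below is a minimal sketch of that kind of check, not the literal sglang implementation (check_memory_balance is a hypothetical name, and the real threshold and helpers may differ); it assumes an already-initialized torch.distributed process group spanning all TP ranks.

# Sketch only: a cross-rank memory-balance check in the spirit of the one in
# the traceback above. Names and the 0.9 tolerance are assumptions, not the
# exact sglang code.
import torch
import torch.distributed as dist

def check_memory_balance(device: torch.device, tolerance: float = 0.9) -> float:
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    local_gb = free_bytes / (1 << 30)
    min_gb = torch.tensor([local_gb], device=device)
    dist.all_reduce(min_gb, op=dist.ReduceOp.MIN)  # worst rank's free memory
    # If some rank has much less free memory than this one, a GPU is probably
    # occupied by another process (a leaked worker, another job, etc.).
    if min_gb.item() < local_gb * tolerance:
        raise ValueError(
            "The memory capacity is unbalanced. "
            "Some GPUs may be occupied by other processes."
        )
    return min_gb.item()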

Reproduction

python3 -m sglang.launch_server --model-path /models/deepseek --tp 16 --dist-init-addr $HEAD_IP:20000 --nnodes 2 --node-rank ${INDEX} --trust-remote-code --context-length 131072 --host 0.0.0.0 --port 8080 --enable-torch-compile --torch-compile-max-bs 16 --speculative-algo NEXTN --speculative-draft /data04/DeepSeek-R1-NextN --speculative-num-steps 2 --speculative-eagle-topk 4 --speculative-num-draft-tokens 4 --disable-radix
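
Before relaunching, it can help to confirm that every GPU on both nodes really has the same amount of free memory (nvidia-smi works too); a single stray process on one GPU is enough to trip the balance check. A small illustrative snippet, not part of sglang:

# Illustrative per-GPU free-memory report; run on each node before launching.
# A GPU with noticeably less free memory than its peers is likely held by
# another process, which is what the ValueError above complains about.
import torch

for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {free / 1024**3:.2f} GiB free of {total / 1024**3:.2f} GiB")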

Environment

2025-03-09 14:45:29,534 - INFO - flashinfer.jit: Prebuilt kernels not found, using JIT backend
WARNING 03-09 14:45:32 cuda.py:23] You are using a deprecated pynvml package. Please install nvidia-ml-py instead, and make sure to uninstall pynvml. When both of them are installed, pynvml will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
WARNING:mistral_common.tokens.tokenizers.multimodal:Warning: Your installation of OpenCV appears to be broken: module 'cv2.dnn' has no attribute 'DictValue'.Please follow the instructions at opencv/opencv-python#884 to correct your environment. The import of cv2 has been skipped.
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)

Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.161.08
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.1.post2
triton: 3.1.0
transformers: 4.48.2
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.9.3
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.22.3
orjson: 3.10.15
packaging: 23.2
psutil: 5.9.4
pydantic: 2.10.6
multipart: 0.0.20
zmq: 25.1.2
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.6.4.post1
openai: 1.60.2
anthropic: 0.45.2
decord: 0.6.0

NVIDIA Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  NIC2  NIC3  NIC4  CPU Affinity   NUMA Affinity  GPU NUMA ID
GPU0  X     NV18  NV18  NV18  NV18  NV18  NV18  NV18  PIX   NODE  SYS   SYS   NODE  0-47,96-143    0              N/A
GPU1  NV18  X     NV18  NV18  NV18  NV18  NV18  NV18  NODE  NODE  SYS   SYS   NODE  0-47,96-143    0              N/A
GPU2  NV18  NV18  X     NV18  NV18  NV18  NV18  NV18  NODE  PIX   SYS   SYS   NODE  0-47,96-143    0              N/A
GPU3  NV18  NV18  NV18  X     NV18  NV18  NV18  NV18  NODE  NODE  SYS   SYS   NODE  0-47,96-143    0              N/A
GPU4  NV18  NV18  NV18  NV18  X     NV18  NV18  NV18  SYS   SYS   PIX   NODE  SYS   48-95,144-191  1              N/A
GPU5  NV18  NV18  NV18  NV18  NV18  X     NV18  NV18  SYS   SYS   NODE  NODE  SYS   48-95,144-191  1              N/A
GPU6  NV18  NV18  NV18  NV18  NV18  NV18  X     NV18  SYS   SYS   NODE  PIX   SYS   48-95,144-191  1              N/A
GPU7  NV18  NV18  NV18  NV18  NV18  NV18  NV18  X     SYS   SYS   NODE  NODE  SYS   48-95,144-191  1              N/A
NIC0  PIX   NODE  NODE  NODE  SYS   SYS   SYS   SYS   X     NODE  SYS   SYS   NODE
NIC1  NODE  NODE  PIX   NODE  SYS   SYS   SYS   SYS   NODE  X     SYS   SYS   NODE
NIC2  SYS   SYS   SYS   SYS   PIX   NODE  NODE  NODE  SYS   SYS   X     NODE  SYS
NIC3  SYS   SYS   SYS   SYS   NODE  NODE  PIX   NODE  SYS   SYS   NODE  X     SYS
NIC4  NODE  NODE  NODE  NODE  SYS   SYS   SYS   SYS   NODE  NODE  SYS   SYS   X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_3
NIC2: mlx5_4
NIC3: mlx5_5
NIC4: mlx5_bond_0

ulimit soft: 1048576
