Describe the bug
Unable to launch the sglang server with google/gemma-3n-E2B-it: startup crashes during scheduler initialization with an AttributeError (full log below).

Reproduction
$ python3 -m sglang.launch_server --trust-remote-code --model-path /home/jobuser/.cache/huggingface/hub/models--google--gemma-3n-E2B-it/snapshots/5e092ebca197cdcd8d8b195040accf22693501bc/
[2025-07-16 05:13:15] Inferred chat template from model path: gemma-it
[2025-07-16 05:13:20] Attention backend not set. Use flashinfer backend by default.
[2025-07-16 05:13:20] Init torch distributed begin.
[2025-07-16 05:13:21] Init torch distributed ends. mem usage=0.00 GB
[2025-07-16 05:13:21] Load weight begin. avail mem=78.50 GB
Loading safetensors checkpoint shards: 0% Completed | 0/3 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 33% Completed | 1/3 [00:00<00:00, 2.22it/s]
Loading safetensors checkpoint shards: 67% Completed | 2/3 [00:01<00:00, 1.46it/s]
Loading safetensors checkpoint shards: 100% Completed | 3/3 [00:01<00:00, 1.51it/s]
Loading safetensors checkpoint shards: 100% Completed | 3/3 [00:01<00:00, 1.55it/s]
[2025-07-16 05:13:24] Load weight end. type=Gemma3nForConditionalGeneration, dtype=torch.bfloat16, avail mem=68.21 GB, mem usage=10.29 GB.
[2025-07-16 05:13:24] KV Cache is allocated. #tokens: 898930, K size: 25.72 GB, V size: 25.72 GB
[2025-07-16 05:13:24] Memory pool end. avail mem=16.19 GB
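As a sanity check on the allocation above, assuming Gemma 3n E2B's published text config (30 layers, 2 KV heads, head_dim 256; figures taken from the public model config, not from this log), the reported K size matches the token count:

    # Back-of-the-envelope check of the logged K-cache size.
    # Assumed config: 30 layers x 2 KV heads x head_dim 256, bf16 (2 bytes).
    num_tokens = 898_930
    bytes_per_token = 30 * 2 * 256 * 2  # layers * kv_heads * head_dim * dtype size
    print(f"{num_tokens * bytes_per_token / 2**30:.2f} GiB")  # 25.72 GiB, as logged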
[2025-07-16 05:13:24] Capture cuda graph begin. This can take up to several minutes. avail mem=15.52 GB
[2025-07-16 05:13:24] Capture cuda graph bs [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160]
Capturing batches (bs=1 avail_mem=12.83 GB): 100%|██████████████| 23/23 [00:14<00:00, 1.56it/s]
[2025-07-16 05:13:39] Capture cuda graph end. Time elapsed: 14.85 s. mem usage=2.72 GB. avail mem=12.79 GB.
[2025-07-16 05:13:41] Scheduler hit an exception: Traceback (most recent call last):
File "/home/jobuser/sglang/python/sglang/srt/managers/scheduler.py", line 2899, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, pp_rank, dp_rank)
File "/home/jobuser/sglang/python/sglang/srt/managers/scheduler.py", line 398, in __init__
self.tp_worker.get_tokens_per_layer_info()
File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 106, in get_tokens_per_layer_info
return self.worker.get_tokens_per_layer_info()
File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker.py", line 187, in get_tokens_per_layer_info
self.model_runner.full_max_total_num_tokens,
AttributeError: 'ModelRunner' object has no attribute 'full_max_total_num_tokens'. Did you mean: 'max_total_num_tokens'?
[2025-07-16 05:13:41] Received sigquit from a child process. It usually means the child failed.
Killed
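The error message itself points at the likely culprit: full_max_total_num_tokens looks like a half-renamed sibling of max_total_num_tokens, which ModelRunner does have. A minimal monkey-patch sketch of a possible workaround, assuming the two counts are interchangeable when a model has no separate "full" KV pool (import path inferred from the sglang source tree; this is not a confirmed fix):

    # Hypothetical workaround, not a confirmed fix: alias the missing
    # attribute to the one the error message suggests.
    from sglang.srt.model_executor.model_runner import ModelRunner

    if not hasattr(ModelRunner, "full_max_total_num_tokens"):
        # Read-only alias: full_max_total_num_tokens -> max_total_num_tokens
        ModelRunner.full_max_total_num_tokens = property(
            lambda self: self.max_total_num_tokens
        )

Note the crash happens in run_scheduler_process, i.e. in a child process, so the alias would have to be applied where that process imports sglang (a local source edit of tp_worker.py, or a sitecustomize.py hook), not in the shell that runs launch_server.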
Environment
sglang latest main branch: https://github.com/sgl-project/sglang/tree/497efe747d1f1cbcb6721f9d1721901e978956b4

$ python3 -m sglang.check_env
Python: 3.10.14 (main, Jul 14 2024, 22:24:12) [GCC 11.2.0]
CUDA available: True
GPU 0,1: NVIDIA H100 80GB HBM3
GPU 0,1 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.77
CUDA Driver Version: 550.163.01
PyTorch: 2.7.1+cu126
sglang: 0.4.9.post2
sgl_kernel: 0.2.5
flashinfer_python: 0.2.7.post1
triton: 3.3.1
transformers: 4.53.0
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.12.13
fastapi: 0.116.0
hf_transfer: 0.1.9
huggingface_hub: 0.33.2
interegular: 0.3.3
modelscope: 1.27.1
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.21
openai: 1.93.1
tiktoken: 0.9.0
anthropic: 0.57.1
litellm: 1.74.0.post1
decord: 0.6.0
NVIDIA Topology:
        GPU0  GPU1  NIC0  NIC1  NIC2  NIC3  NIC4  NIC5  CPU Affinity    NUMA Affinity  GPU NUMA ID
GPU0     X    NV18  NODE  NODE  NODE  PIX   SYS   SYS   0-63,128-191    0              N/A
GPU1    NV18   X    SYS   SYS   SYS   SYS   PIX   NODE  64-127,192-255  1              N/A
NIC0    NODE  SYS    X    NODE  NODE  NODE  SYS   SYS
NIC1    NODE  SYS   NODE   X    PIX   NODE  SYS   SYS
NIC2    NODE  SYS   NODE  PIX    X    NODE  SYS   SYS
NIC3    PIX   SYS   NODE  NODE  NODE   X    SYS   SYS
NIC4    SYS   PIX   SYS   SYS   SYS   SYS    X    NODE
NIC5    SYS   NODE  SYS   SYS   SYS   SYS   NODE   X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
ulimit soft: 10000000