Describe the bug
Environment Description
I am using SGLang 0.5.9. My machine does not have RDMA devices; everything runs over TCP.
When I enable the L3 KV cache (i.e., set --hicache-storage-backend) and run a benchmark program to stress-test the service, the memory usage of the sglang::scheduler process grows continuously.
Based on my observations:
- Memory usage increases by approximately 10 GB per hour
- There appears to be no upper limit
- The process memory eventually exceeds 200 GB
- The process is then killed by the system (OOM)
The 200 GB figure refers to the RES value reported by top for the scheduler process. In addition, after I stop the benchmark program, the memory usage stops growing but never decreases.
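At roughly 10 GB per hour, it takes on the order of 20 hours for the process to reach the 200 GB at which it is OOM-killed. To track the RES growth over time, a minimal sampling loop like the following can be used (a sketch; it assumes a single scheduler process whose title matches "sglang::scheduler" and can be found with pgrep -f):

# Sketch: sample the scheduler's RES once a minute; exits when the
# process disappears (e.g., after the OOM kill).
while pid=$(pgrep -f 'sglang::scheduler' | head -n1) && [ -n "$pid" ]; do
  rss_kb=$(ps -o rss= -p "$pid")
  echo "$(date +%T) RES = $((rss_kb / 1024)) MiB"
  sleep 60
done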
I tested both:
- hicache-storage-backend=file
- hicache-storage-backend=mooncake
The issue occurs in both cases, regardless of whether the backend is local disk or Mooncake.
Additionally, I tested all versions from 0.5.5 to 0.5.9, and the problem exists in all of them.
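Each version was tested after a clean reinstall along these lines (a sketch; assumes the corresponding PyPI wheels, substituting the version under test):

# Sketch: install a specific SGLang version to test it.
pip install "sglang[all]==0.5.9"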
Versions
- SGLang version: 0.5.6 to 0.5.9
- Mooncake version: 0.3.7 to 0.3.9
Reproduction
SGLang Launch Command (File Backend)
export SGLANG_HICACHE_FILE_BACKEND_STORAGE_DIR=/root/kvcache
nohup python3 -m sglang.launch_server \
--model-path Qwen/Qwen3-4B \
--trust-remote-code \
--host 0.0.0.0 \
--port 30000 \
--mem-fraction-static 0.85 \
--enable-hierarchical-cache \
--hicache-size 100 \
--page-size 128 \
--hicache-io-backend kernel \
--hicache-mem-layout page_first \
--hicache-storage-backend file &
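To confirm the growth is in the process heap rather than in the L3 cache files themselves, the on-disk cache size can be compared against the scheduler's RES while the benchmark runs (a sketch; uses the storage dir set above):

# Sketch: compare on-disk L3 cache size vs. scheduler RES.
du -sh /root/kvcache
ps -eo pid,rss,cmd | grep '[s]glang::scheduler'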
SGLang Launch Command (Mooncake Backend)
export MOONCAKE_TE_META_DATA_SERVER="etcd://etcdn1.mooncake-c1.dns.org:2379;etcd://etcdn2.mooncake-c1.dns.org:2379;etcd://etcdn3.mooncake-c1.dns.org:2379;etcd://etcdn4.mooncake-c1.dns.org:2379;etcd://etcdn5.mooncake-c1.dns.org:2379"
export MOONCAKE_MASTER="etcd://etcdn1.mooncake-c1.dns.org:2379;etcd://etcdn2.mooncake-c1.dns.org:2379;etcd://etcdn3.mooncake-c1.dns.org:2379;etcd://etcdn4.mooncake-c1.dns.org:2379;etcd://etcdn5.mooncake-c1.dns.org:2379"
export MOONCAKE_PROTOCOL="tcp"
export MOONCAKE_DEVICE=""
nohup python3 -m sglang.launch_server \
--model-path Qwen/Qwen3-4B \
--trust-remote-code \
--host 0.0.0.0 \
--port 30000 \
--mem-fraction-static 0.85 \
--enable-hierarchical-cache \
--hicache-size 100 \
--page-size 128 \
--hicache-io-backend kernel \
--hicache-mem-layout page_first \
--hicache-storage-backend mooncake \
--enable-metrics &
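For completeness, the etcd endpoints used by Mooncake can be checked for TCP reachability before launch (a hypothetical pre-flight check; assumes etcdctl v3 is installed and checks only the first endpoint):

# Hypothetical pre-flight check: confirm an etcd endpoint answers over TCP.
ETCDCTL_API=3 etcdctl --endpoints=etcdn1.mooncake-c1.dns.org:2379 endpoint health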
Benchmark Command
python3 bench_multiturn.py \
--host localhost \
--port 30000 \
--model-path Qwen/Qwen3-4B \
--dataset-path /root/sglang-src/data/ShareGPT_V3_unfiltered_cleaned_split.json \
--disable-random-sample \
--num-clients 24 \
--num-rounds 12 \
--max-parallel 4 \
--request-rate 8 \
--request-length 2048 \
--output-length 1 \
--ready-queue-policy random \
--disable-auto-run \
--enable-round-barrier
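Because the leak accrues at roughly 10 GB per hour, a single benchmark pass may not be enough to reach the OOM; the service can be kept under sustained load by re-running the benchmark in a loop (a sketch; same arguments as above):

# Sketch: keep the server under load until the scheduler is OOM-killed.
while true; do
  python3 bench_multiturn.py --host localhost --port 30000 \
    --model-path Qwen/Qwen3-4B \
    --dataset-path /root/sglang-src/data/ShareGPT_V3_unfiltered_cleaned_split.json \
    --disable-random-sample --num-clients 24 --num-rounds 12 --max-parallel 4 \
    --request-rate 8 --request-length 2048 --output-length 1 \
    --ready-queue-policy random --disable-auto-run --enable-round-barrier
done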
Environment
Python: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]
CUDA available: True
GPU 0: NVIDIA GeForce RTX 3090
GPU 0 Compute Capability: 8.6
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 575.64.03
PyTorch: 2.9.1+cu129
sglang: 0.5.9
sgl_kernel: 0.3.21
flashinfer_python: 0.6.3
flashinfer_cubin: 0.6.3
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.3.5
aiohttp: 3.13.3
fastapi: 0.124.2
hf_transfer: 0.1.9
huggingface_hub: 0.36.0
interegular: 0.3.3
modelscope: 1.33.0
orjson: 3.11.5
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.3
pydantic: 2.12.5
python-multipart: 0.0.20
pyzmq: 27.1.0
uvicorn: 0.38.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.75.0
litellm: Module Not Found
decord2: 2.0.0
NVIDIA Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-31 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 1048576