Checklist
Describe the bug
Related: issue #11828 and PR #11785; we have opened PR #19915 to fix this.
TL;DR
- Multi-modal cache hits fail when `SGLANG_USE_CUDA_IPC_TRANSPORT=1` is enabled alongside `SGLANG_ENABLE_MM_SPLITTING=1`.
- Two requests come successively: request A with image `a`; request B with images `a` and `b`.
- With `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=0`, the cache of image `a` is hit for request B.
- With `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=1`, the cache of image `a` is not hit for request B.
More findings from my investigation
- The code path goes through `get_new_expanded_mm_items()` and `hash_feature()` when request B comes:
```python
def hash_feature(f):
    if isinstance(f, list):
        if isinstance(f[0], torch.Tensor):
            return tensor_hash(f)
        return data_hash(tuple(flatten_nested_list(f)))
    elif isinstance(f, np.ndarray):
        arr = np.ascontiguousarray(f)
        arr_bytes = arr.tobytes()
        return data_hash(arr_bytes)
    elif isinstance(f, torch.Tensor):
        return tensor_hash([f])
    elif isinstance(f, CudaIpcTensorTransportProxy):
        # With SGLANG_USE_CUDA_IPC_TRANSPORT=1, f is a proxy wrapping the
        # batched tensor, so the hash covers all images in the request at once.
        reconstruct_t = f.reconstruct_on_target_device(torch.cuda.current_device())
        return tensor_hash([reconstruct_t])
    return data_hash(f)
```
- With `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=0`:
  - The argument `f` here is a torch tensor holding the feature of image `a` in request B, so the cache can be hit for request B.
- With `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=1`:
  - The argument `f` here is not a torch tensor but a `sglang.srt.utils.cuda_ipc_transport_utils.CudaIpcTensorTransportProxy` object. When reconstructed via `reconstruct_on_target_device()`, `reconstruct_t` is a torch tensor containing the features of images `a` and `b` concatenated, so the cache is necessarily missed for request B (see the toy illustration after this list).
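To make the divergence concrete, here is a toy illustration of why hashing the reconstructed batch can never match the per-image hash. `toy_tensor_hash` is a simplified stand-in I wrote for this example (SHA-256 over raw tensor bytes), not SGLang's actual `tensor_hash`:

```python
import hashlib

import torch


def toy_tensor_hash(t: torch.Tensor) -> str:
    # Simplified stand-in for SGLang's tensor_hash: hash the raw bytes.
    return hashlib.sha256(t.cpu().contiguous().numpy().tobytes()).hexdigest()


feat_a = torch.randn(256, 1024)  # feature of image a (cached by request A)
feat_b = torch.randn(256, 1024)  # feature of image b

# SGLANG_USE_CUDA_IPC_TRANSPORT=0: request B hashes image a's feature alone,
# so it reproduces the hash written to the cache by request A.
assert toy_tensor_hash(feat_a) == toy_tensor_hash(feat_a)

# SGLANG_USE_CUDA_IPC_TRANSPORT=1: the proxy reconstructs the whole batch
# tensor (a and b concatenated), whose hash never equals image a's hash.
batch = torch.cat([feat_a, feat_b], dim=0)
assert toy_tensor_hash(batch) != toy_tensor_hash(feat_a)
```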
- My proposed solution, PR #19915, extracts and unpacks the `CudaIpcTensorTransportProxy` object before creating `expanded_mm_items`, then restores it afterwards, so that the `f` passed into `hash_feature()` is per-image. This ensures proper per-image hashing while preserving the CUDA IPC transport optimization (a rough sketch follows).
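A rough sketch of the idea, not the actual diff in PR #19915; the helper name and the `item.offset`/`item.length` fields are hypothetical, and `hash_feature()` is the function quoted above:

```python
import torch


def hash_items_per_image(items, proxy):
    # Reconstruct the batched tensor once, before expanded_mm_items is built.
    batch = proxy.reconstruct_on_target_device(torch.cuda.current_device())
    hashes = []
    for item in items:
        # Slice out this image's rows so hash_feature() sees exactly one
        # image, matching the hash computed when the image was first cached.
        per_image = batch[item.offset : item.offset + item.length]
        hashes.append(hash_feature(per_image))
    # The proxy itself is left intact and restored on the items afterwards,
    # preserving the CUDA IPC transport optimization downstream.
    return hashes
```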
Reproduction
- The function I use to start the SGLang server is shown below, followed by a sketch of the client side.
- As mentioned in the description, we send the two requests and search for the `#new-token: XXX, #cached-token: YYY` information in the log.
```python
import os
import subprocess

# HOST, PORT, and MODEL_PATH are defined elsewhere in my script.

def start_server():
    env = os.environ.copy()
    env['SGLANG_USE_CUDA_IPC_TRANSPORT'] = '1'
    env['SGLANG_ENABLE_MM_SPLITTING'] = '1'
    env['SGLANG_VLM_CACHE_SIZE_MB'] = '8192'
    env['SGLANG_MM_ITEM_MEM_POOL_RECYCLE_INTERVAL_SEC'] = '60'
    env['FLASHINFER_WORKSPACE_SIZE'] = '1073741824'
    cmd = [
        "python", "-m", "sglang.launch_server",
        "--disable-cuda-graph",
        "--disable-overlap-schedule",
        "--enable-deterministic-inference",
        "--trust-remote-code",
        "--attention-backend", "fa3",
        "--chunked-prefill-size", "163840",
        "--cuda-graph-max-bs", "64",
        "--host", HOST,
        "--log-level", "debug",
        "--max-prefill-tokens", "65536",
        "--max-running-requests", "1024",
        "--mem-fraction-static", "0.5",
        "--mm-attention-backend", "fa3",
        "--model-path", MODEL_PATH,
        "--nnodes", "1",
        "--node-rank", "0",
        "--page-size", "32",
        "--port", str(PORT),
        "--tp", "1",
        "--watchdog-timeout", "86400",
    ]
    print(">>> Start command: ", " ".join(cmd))
    proc = subprocess.Popen(cmd, env=env)
    return proc
```
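And a minimal sketch of the client side, assuming the OpenAI-compatible `/v1/chat/completions` endpoint. The file names `image_a.png`/`image_b.png` and the `"default"` model name are placeholders, and `HOST`/`PORT` are the same constants used above:

```python
import base64

import requests


def image_part(path):
    # Encode a local image as a base64 data URL for the chat API.
    with open(path, "rb") as fp:
        b64 = base64.b64encode(fp.read()).decode()
    return {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}


def send(images, text="Describe the image(s)."):
    payload = {
        "model": "default",
        "messages": [{
            "role": "user",
            "content": [image_part(p) for p in images]
                       + [{"type": "text", "text": text}],
        }],
    }
    return requests.post(f"http://{HOST}:{PORT}/v1/chat/completions",
                         json=payload, timeout=600)


send(["image_a.png"])                  # request A: image a
send(["image_a.png", "image_b.png"])   # request B: images a and b
# Then search the server log for "#new-token" / "#cached-token".
```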
The output I get when reproducing the issue:
- `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=0`

```
...
# request A
[2026-03-04 18:10:03] Prefill batch, #new-seq: 1, #new-token: 4128, #cached-token: 0, token usage: 0.02, #running-req: 0, #queue-req: 0, input throughput (token/s): 0.00, cuda graph: False
...
# request B
[2026-03-04 18:10:07] Prefill batch, #new-seq: 1, #new-token: 4128, #cached-token: 4096, token usage: 0.05, #running-req: 0, #queue-req: 0, input throughput (token/s): 25.25, cuda graph: False
...
# request A again
[2026-03-04 18:10:09] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 4096, token usage: 0.02, #running-req: 0, #queue-req: 0, input throughput (token/s): 2453.14, cuda graph: False
...
# request B again
[2026-03-04 18:10:11] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 8192, token usage: 0.05, #running-req: 0, #queue-req: 0, input throughput (token/s): 12.96, cuda graph: False
...
```
- `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=1`

```
...
# request A
[2026-03-04 18:16:50] Prefill batch, #new-seq: 1, #new-token: 4128, #cached-token: 0, token usage: 0.03, #running-req: 0, #queue-req: 0, input throughput (token/s): 0.00, cuda graph: False
...
# request B
[2026-03-04 18:16:53] Prefill batch, #new-seq: 1, #new-token: 8224, #cached-token: 0, token usage: 0.06, #running-req: 0, #queue-req: 0, input throughput (token/s): 35.14, cuda graph: False
...
# request A again
[2026-03-04 18:16:54] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 4096, token usage: 0.03, #running-req: 0, #queue-req: 0, input throughput (token/s): 7291.74, cuda graph: False
...
# request B again
[2026-03-04 18:16:56] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 8192, token usage: 0.06, #running-req: 0, #queue-req: 0, input throughput (token/s): 24.64, cuda graph: False
...
```
- `SGLANG_ENABLE_MM_SPLITTING=1` and `SGLANG_USE_CUDA_IPC_TRANSPORT=1` after the fix in PR #19915

```
...
# request A
[2026-03-05 04:22:58] Prefill batch, #new-seq: 1, #new-token: 4128, #cached-token: 0, token usage: 0.03, #running-req: 0, #queue-req: 0, input throughput (token/s): 0.00, cuda graph: False
...
# request B
[2026-03-05 04:23:00] Prefill batch, #new-seq: 1, #new-token: 4128, #cached-token: 4096, token usage: 0.06, #running-req: 0, #queue-req: 0, input throughput (token/s): 1937.28, cuda graph: False
...
# request A again
[2026-03-05 04:23:01] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 4096, token usage: 0.03, #running-req: 0, #queue-req: 0, input throughput (token/s): 3669.26, cuda graph: False
...
# request B again
[2026-03-05 04:23:09] Prefill batch, #new-seq: 1, #new-token: 32, #cached-token: 8192, token usage: 0.06, #running-req: 0, #queue-req: 0, input throughput (token/s): 3.81, cuda graph: False
```
Environment
```
Python: 3.10.12 (main, Jan 26 2026, 14:55:28) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 PCIe
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.9, V12.9.86
CUDA Driver Version: 590.48.01
PyTorch: 2.9.1+cu128
sglang: 0.0.0
sgl_kernel: 0.3.21
flashinfer_python: 0.6.3
flashinfer_cubin: 0.6.3
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.13.3
fastapi: 0.133.1
hf_transfer: 0.1.9
huggingface_hub: 0.36.2
interegular: 0.3.3
modelscope: 1.34.0
orjson: 3.11.7
outlines: 0.1.11
packaging: 26.0
psutil: 7.2.2
pydantic: 2.12.5
python-multipart: 0.0.22
pyzmq: 27.1.0
uvicorn: 0.41.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.84.0
litellm: Module Not Found
decord2: 3.0.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS SYS SYS SYS SYS SYS SYS 48-63,176-191 3 N/A
GPU1 SYS X SYS SYS SYS SYS SYS SYS 32-47,160-175 2 N/A
GPU2 SYS SYS X SYS SYS SYS SYS SYS 16-31,144-159 1 N/A
GPU3 SYS SYS SYS X SYS SYS SYS SYS 0-15,128-143 0 N/A
GPU4 SYS SYS SYS SYS X SYS SYS SYS 112-127,240-255 7 N/A
GPU5 SYS SYS SYS SYS SYS X SYS SYS 96-111,224-239 6 N/A
GPU6 SYS SYS SYS SYS SYS SYS X SYS 80-95,208-223 5 N/A
GPU7 SYS SYS SYS SYS SYS SYS SYS X 64-79,192-207 4 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 1048576
```
- We would appreciate feedback from the SGLang maintainers on the merits of this fix. If it is received positively, we are eager for further discussion and improvement.