[Bug] radix-cache does not work for audio mm LLM #8366

@byjiang1996

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

Tried with both gemma3n-it and phi4-mm; both result in a server crash once the radix cache reuses a prefix containing multimodal tokens.

[2025-07-25 22:09:56] Prefill batch. #new-seq: 1, #new-token: 471, #cached-token: 271, full token usage: 0.00, swa token usage: 0.00, #running-req: 0, #queue-req: 0, 
[2025-07-25 22:09:56] Number of tokens in multimodal embedding does not match those in the input text. Got 376 tokens in the text but 120 tokens from multimodal embeddings.
[2025-07-25 22:09:56] TpModelWorkerClient hit an exception: Traceback (most recent call last):
  File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 140, in forward_thread_func
    self.forward_thread_func_()
  File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 175, in forward_thread_func_
    self.worker.forward_batch_generation(
  File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker.py", line 228, in forward_batch_generation
    logits_output, can_run_cuda_graph = self.model_runner.forward(
  File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 1553, in forward
    output = self._forward_raw(
  File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 1598, in _forward_raw
    ret = self.forward_extend(
  File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 1498, in forward_extend
    return self.model.forward(
  File "/home/jobuser/sglang/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/jobuser/sglang/python/sglang/srt/models/gemma3n_mm.py", line 432, in forward
    hidden_states = general_mm_embed_routine(
  File "/home/jobuser/sglang/python/sglang/srt/managers/mm_utils.py", line 529, in general_mm_embed_routine
    inputs_embeds = embed_mm_inputs(
  File "/home/jobuser/sglang/python/sglang/srt/managers/mm_utils.py", line 450, in embed_mm_inputs
    embedding, mask = get_embedding_and_mask(
  File "/home/jobuser/sglang/python/sglang/srt/managers/mm_utils.py", line 374, in get_embedding_and_mask
    embedding = _adjust_embedding_length(embedding, special_multimodal_mask, logger)
  File "/home/jobuser/sglang/python/sglang/srt/managers/mm_utils.py", line 324, in _adjust_embedding_length
    raise RuntimeError(
RuntimeError: Insufficient multimodal embedding length: num_mm_tokens_in_input_ids=376 vs num_mm_tokens_in_embedding=120. This is an internal error

[2025-07-25 22:09:56] Received sigquit from a child process. It usually means the child failed.
[2025-07-25 22:09:56] Dumping requests before crash. self.crash_dump_folder=None
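The crash comes from a length check in sglang's `mm_utils`: the number of multimodal placeholder tokens in the input ids (376, since the radix cache only marked part of the prefix as cached) no longer matches the number of rows in the freshly computed multimodal embedding (120). A minimal sketch of that invariant, with hypothetical function and argument names (the real check lives in `_adjust_embedding_length`):

```python
# Sketch (hypothetical names) of the invariant that fails: every
# multimodal placeholder token in input_ids must be backed by exactly
# one row of the multimodal embedding tensor.
import torch

def check_mm_embedding_length(input_ids, mm_token_id, embedding):
    # Mark the positions that the multimodal embedding must fill.
    mask = input_ids == mm_token_id
    num_in_ids = int(mask.sum())
    num_in_emb = embedding.shape[0]
    if num_in_ids != num_in_emb:
        # Mirrors the RuntimeError seen in the traceback above.
        raise RuntimeError(
            f"Insufficient multimodal embedding length: "
            f"num_mm_tokens_in_input_ids={num_in_ids} vs "
            f"num_mm_tokens_in_embedding={num_in_emb}."
        )
    return mask
```

When the radix cache hits a partial multimodal prefix, the text side still counts all placeholder tokens while the embedding side only covers the uncached portion, so this check fires.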

Reproduction

python3 -m sglang.launch_server --trust-remote-code --model-path microsoft/Phi-4-multimodal-instruct
import subprocess

curl_command = f"""
curl -s http://localhost:{30000}/v1/chat/completions \\
  -H "Content-Type: application/json" \\
  -d '{{
    "model": "nonee",
    "messages": [
      {{
        "role": "user", 
        "content": [
          {{
            "type": "image_url", 
            "image_url": {{
              "url": "/home/jobuser/sglang/australia.jpg"
            }}
          }},
          {{
            "type": "audio_url", 
            "audio_url": {{
              "url": "/home/jobuser/.cache/huggingface/hub/models--microsoft--Phi-4-multimodal-instruct/snapshots/33e62acdd07cd7d6635badd529aa0a3467bb9c6a/examples/what_is_the_traffic_sign_in_the_image.wav"
            }}
          }}
        ]
      }}
    ],
    "temperature": 0,
    "max_tokens": 1000
  }}'
"""

response = subprocess.check_output(curl_command, shell=True).decode()
print(response)


# The second request shares a prefix with the first (same image, different
# audio), so the radix cache reuses cached multimodal tokens and the
# server crashes.
curl_command = f"""
curl -s http://localhost:{30000}/v1/chat/completions \\
  -H "Content-Type: application/json" \\
  -d '{{
    "model": "nonee",
    "messages": [
      {{
        "role": "user", 
        "content": [
          {{
            "type": "image_url", 
            "image_url": {{
              "url": "/home/jobuser/sglang/australia.jpg"
            }}
          }},
          {{
            "type": "audio_url", 
            "audio_url": {{
              "url": "/home/jobuser/.cache/huggingface/hub/models--microsoft--Phi-4-multimodal-instruct/snapshots/33e62acdd07cd7d6635badd529aa0a3467bb9c6a/examples/what_is_shown_in_this_image.wav"
            }}
          }}
        ]
      }},
      {{
        "role": "assistant", 
        "content": [
          {{
            "type": "text", 
            "text": "The image depicts a street scene with a prominent red stop sign in the foreground. The background showcases a building with traditional Chinese architecture, characterized by its red roof and ornate decorations. There are also several statues of lions, which are common in Chinese culture, positioned in front of the building. The street is lined with various shops and businesses, and there is a car passing by."
          }}
        ]
      }},
      {{
        "role": "user", 
        "content": [
          {{
            "type": "audio_url", 
            "audio_url": {{
              "url": "/home/jobuser/.cache/huggingface/hub/models--microsoft--Phi-4-multimodal-instruct/snapshots/33e62acdd07cd7d6635badd529aa0a3467bb9c6a/examples/what_is_the_traffic_sign_in_the_image.wav"
            }}
          }}
        ]
      }}
    ],
    "temperature": 0,
    "max_tokens": 1000
  }}'
"""

response = subprocess.check_output(curl_command, shell=True).decode()
print(response)
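A possible workaround until this is fixed (assuming the flag behaves as documented) is to disable prefix caching entirely when serving audio multimodal models:

```shell
# Hypothetical workaround: launch without the radix cache so no
# multimodal prefix is ever reused.
python3 -m sglang.launch_server --trust-remote-code \
  --model-path microsoft/Phi-4-multimodal-instruct \
  --disable-radix-cache
```

This trades away prefix-cache hits, so it is only a stopgap, not a fix for the underlying mismatch.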

Environment

(venv) jobuser [ ~/sglang ]$ python3 -m sglang.check_env
Python: 3.10.14 (main, Jul 14 2024, 22:24:12) [GCC 11.2.0]
CUDA available: True
GPU 0,1: NVIDIA H100 80GB HBM3
GPU 0,1 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.77
CUDA Driver Version: 550.163.01
PyTorch: 2.7.1+cu126
sglang: 0.4.9.post3
sgl_kernel: 0.2.7
flashinfer_python: 0.2.9rc1
triton: 3.3.1
transformers: 4.53.2
torchao: 0.9.0
numpy: 2.2.6
aiohttp: 3.12.13
fastapi: 0.116.0
hf_transfer: 0.1.9
huggingface_hub: 0.33.2
interegular: 0.3.3
modelscope: 1.27.1
orjson: 3.10.18
outlines: 0.1.11
packaging: 25.0
psutil: 7.0.0
pydantic: 2.11.7
python-multipart: 0.0.20
pyzmq: 27.0.0
uvicorn: 0.35.0
uvloop: 0.21.0
vllm: Module Not Found
xgrammar: 0.1.21
openai: 1.93.1
tiktoken: 0.9.0
anthropic: 0.57.1
litellm: 1.74.0.post1
decord: 0.6.0
NVIDIA Topology: 
        GPU0    GPU1    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    SYS     SYS     SYS     SYS     PIX     NODE    64-127,192-255  1               N/A
GPU1    NV18     X      SYS     SYS     SYS     SYS     NODE    NODE    64-127,192-255  1               N/A
NIC0    SYS     SYS      X      NODE    NODE    NODE    SYS     SYS
NIC1    SYS     SYS     NODE     X      PIX     NODE    SYS     SYS
NIC2    SYS     SYS     NODE    PIX      X      NODE    SYS     SYS
NIC3    SYS     SYS     NODE    NODE    NODE     X      SYS     SYS
NIC4    PIX     NODE    SYS     SYS     SYS     SYS      X      NODE
NIC5    NODE    NODE    SYS     SYS     SYS     SYS     NODE     X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5


ulimit soft: 10000000
