[Bug] RuntimeError in flash_attn_with_kvcache: query and key must have the same dtype with FP8 models #18290

@zack041

Description

Checklist

  • I searched related issues but found no solution.
  • The bug persists in the latest version.
  • Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.
  • If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.
  • Please use English. Otherwise, it will be closed.

Describe the bug

When running sglang.launch_server with a ModelOpt FP8 model (e.g., nvidia/Llama-3.1-8B-Instruct-FP8) and CUDA graph enabled, the server fails with:

RuntimeError: query and key must have the same dtype

This query/key dtype mismatch is likely not specific to this checkpoint and would probably reproduce with other quantized models whose KV cache is stored in FP8.
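To illustrate the class of failure (not sglang's actual code paths): flash-attn's `flash_attn_with_kvcache` checks that the query and the cached keys share a dtype before launching the kernel. With an FP8-quantized KV cache, the query commonly stays in bf16, which trips that guard. The sketch below is plain Python with a stand-in tensor type, so the names are illustrative, not sglang internals; the usual fix in a backend is to cast the query to the KV-cache dtype (or keep the cache in bf16) before the call.

```python
from dataclasses import dataclass


@dataclass
class FakeTensor:
    """Stand-in for a torch.Tensor; only the dtype matters here."""
    dtype: str


def flash_attn_with_kvcache_stub(q, k_cache):
    # Mimics the dtype guard that produces the reported RuntimeError.
    if q.dtype != k_cache.dtype:
        raise RuntimeError("query and key must have the same dtype")
    return "ok"


# Query in bf16 vs. an FP8 KV cache trips the guard:
q = FakeTensor(dtype="bfloat16")
k_cache = FakeTensor(dtype="float8_e4m3fn")
try:
    flash_attn_with_kvcache_stub(q, k_cache)
except RuntimeError as e:
    print(e)  # query and key must have the same dtype

# Casting the query to the cache dtype before the call avoids it:
q_cast = FakeTensor(dtype=k_cache.dtype)
print(flash_attn_with_kvcache_stub(q_cast, k_cache))  # ok
```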

Reproduction

root@56f67d238243:/workspace/sglang# python3 -m sglang.launch_server \
    --model-path nvidia/Llama-3.1-8B-Instruct-FP8 \
    --quantization modelopt_fp8 \
    --port 30000

Environment

Python: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0]
CUDA available: True
GPU 0: NVIDIA H100 80GB HBM3
GPU 0 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.8, V12.8.93
CUDA Driver Version: 580.126.09
PyTorch: 2.9.1+cu128
sglang: 0.5.8
sgl_kernel: 0.3.21
flashinfer_python: 0.6.2
flashinfer_cubin: 0.6.2
flashinfer_jit_cache: Module Not Found
triton: 3.5.1
transformers: 4.57.1
torchao: 0.9.0
numpy: 2.1.2
aiohttp: 3.13.3
fastapi: 0.128.1
hf_transfer: 0.1.9
huggingface_hub: 0.36.1
interegular: 0.3.3
modelscope: 1.34.0
orjson: 3.11.7
outlines: 0.1.11
packaging: 25.0
psutil: 7.1.0
pydantic: 2.12.5
python-multipart: 0.0.22
pyzmq: 27.1.0
uvicorn: 0.40.0
uvloop: 0.22.1
vllm: Module Not Found
xgrammar: 0.1.27
openai: 2.6.1
tiktoken: 0.12.0
anthropic: 0.77.1
litellm: Module Not Found
decord2: 3.0.0
NVIDIA Topology: 
        GPU0    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    NIC8    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PXB     PIX     NODE    NODE    SYS     SYS     SYS     SYS     SYS     0-15    0               N/A
NIC0    PXB      X      PXB     NODE    NODE    SYS     SYS     SYS     SYS     SYS
NIC1    PIX     PXB      X      NODE    NODE    SYS     SYS     SYS     SYS     SYS
NIC2    NODE    NODE    NODE     X      PXB     SYS     SYS     SYS     SYS     SYS
NIC3    NODE    NODE    NODE    PXB      X      SYS     SYS     SYS     SYS     SYS
NIC4    SYS     SYS     SYS     SYS     SYS      X      PXB     NODE    NODE    NODE
NIC5    SYS     SYS     SYS     SYS     SYS     PXB      X      NODE    NODE    NODE
NIC6    SYS     SYS     SYS     SYS     SYS     NODE    NODE     X      PXB     NODE
NIC7    SYS     SYS     SYS     SYS     SYS     NODE    NODE    PXB      X      NODE
NIC8    SYS     SYS     SYS     SYS     SYS     NODE    NODE    NODE    NODE     X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8


ulimit soft: 4096
