
Double vision prefill throughput by defaulting to optimal vision attention backend #8484

Merged
zhyncs merged 15 commits into sgl-project:main from AlienKevin:optimized-vision-attn on Aug 13, 2025

Conversation

@AlienKevin (Contributor) commented on Jul 29, 2025

Motivation

TL;DR: Vision attention defaults to the inefficient SDPA backend even on platforms that support the more efficient Triton backend. By defaulting to the best supported attention backend, we can double vision prefill throughput on VLMs like Qwen 2.5 VL 7B on H200. A sketch of the intended dispatch follows below.
See #8179 (comment)
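
To make the selection rule concrete, here is a minimal, hedged sketch of the platform-based default this PR describes. The function name is illustrative and this is not sglang's actual code path; only the backend strings triton_attn and sdpa come from the PR itself.

```python
import torch


def default_vision_attn_backend() -> str:
    """Illustrative only: choose the fastest supported vision attention backend.

    Mirrors the behavior described in this PR: prefer the Triton backend
    on CUDA devices, and fall back to SDPA everywhere else.
    """
    if torch.cuda.is_available():
        return "triton_attn"
    return "sdpa"
```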

Benchmark & Profiling

Profiling Qwen 2.5 VL 7B before and after this PR on GB200 shows a 1.6x increase in throughput under ISL=1000, OSL=2, and concurrency=256.
Before PR:

"Input Tokens/s": 7407.142234200068,
"Avg TTFT(s)": 20.554337582262974,
"Avg TPOT(s)": 14.229874295137847

After PR:

"Input Tokens/s": 12073.272607543215,
"Avg TTFT(s)": 12.099749442036076,
"Avg TPOT(s)": 8.979351270686001


@gemini-code-assist (Bot) left a comment

Summary of Changes

Hello @AlienKevin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented a change to significantly improve the vision prefill throughput by ensuring that the most optimal attention backend is used by default. Previously, vision attention would often default to the less efficient SDPA even when a more performant Triton backend was available. My changes now detect CUDA availability and prioritize triton_attn, which has been shown to double vision prefill throughput on VLMs like Qwen 2.5 VL 7B on H200. This optimization leads to a substantial performance gain, as evidenced by benchmarks showing a 1.6x increase in input tokens per second.

Highlights

  • Optimized Vision Attention Backend Selection: The system now defaults to triton_attn on CUDA-enabled platforms for vision attention, falling back to sdpa otherwise, significantly boosting performance.
  • Improved Qwen 2.5 VL Model Integration: The Qwen 2.5 VL model's attention initialization now defers to this dynamic backend selection, removing its previous hardcoded sdpa preference (see the sketch after this list).
  • Significant Throughput Gains: Benchmarks demonstrate a 1.6x increase in input tokens/s, with correspondingly lower Time To First Token (TTFT) and Time Per Output Token (TPOT) for vision models.
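
To illustrate the second highlight, here is a hedged, self-contained sketch of what deferring the backend choice can look like. The class and parameter names are simplified stand-ins, not sglang's actual signatures.

```python
import torch


def default_vision_attn_backend() -> str:
    # Same illustrative default as above: Triton on CUDA, SDPA elsewhere.
    return "triton_attn" if torch.cuda.is_available() else "sdpa"


class VisionBlockSketch:
    """Simplified stand-in for a vision block such as Qwen2_5_VisionBlock."""

    def __init__(self, attn_backend: str | None = None):
        # The old pattern hardcoded "sdpa" here; accepting None instead lets
        # the constructor resolve the platform's best supported backend.
        self.attn_backend = attn_backend or default_vision_attn_backend()


block = VisionBlockSketch()
print(block.attn_backend)  # "triton_attn" on CUDA machines, "sdpa" otherwise
```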

@gemini-code-assist (Bot) left a comment

Code Review

This pull request improves vision prefill throughput by defaulting to the Triton attention backend on CUDA-enabled platforms, which is more efficient than the previous SDPA default. The changes are logical and well-supported by the provided benchmarks. The implementation correctly modifies VisionAttention to select the optimal backend and updates Qwen2_5_VisionBlock to allow this new default behavior. I have one minor style suggestion to improve code quality.

Comment thread on python/sglang/srt/models/qwen2_5_vl.py (outdated)

JustinTong0323 and others added 2 commits on July 28, 2025
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@gemini-code-assist (Bot) commented:

Warning: You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>
@JustinTong0323 requested a review from kushanam as a code owner on July 29, 2025
@JustinTong0323 (Collaborator) commented:

Thanks for the contribution! May I ask why we chose triton_attn instead of fa3?

@AlienKevin (Contributor, Author) commented:

> Thanks for the contribution! May I ask why we chose triton_attn instead of fa3?

I found them to have similar performance, but triton_attn is supported on more devices, like GB200.
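
For users who want to pin a specific backend rather than rely on the default, the example below shows one plausible way to do so through sglang's offline Engine API. The mm_attention_backend argument name is an assumption inferred from this thread's discussion of sdpa, fa3, and triton_attn; verify it against the server arguments of your installed sglang version.

```python
# Assumed example: explicitly overriding the multimodal attention backend.
# mm_attention_backend is an assumed argument name; check your sglang
# version's server arguments before relying on it.
import sglang as sgl

engine = sgl.Engine(
    model_path="Qwen/Qwen2.5-VL-7B-Instruct",
    mm_attention_backend="fa3",  # alternatives: "triton_attn", "sdpa"
)
```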

@AlienKevin (Contributor, Author) commented:

@JustinTong0323 Just checking in: is anything else blocking this PR from being merged? Thanks.

@JustinTong0323 (Collaborator) commented:

> @JustinTong0323 Just checking in: is anything else blocking this PR from being merged? Thanks.

Ping me on Slack if CI has passed or needs a rerun, thanks.

@JustinTong0323 (Collaborator) left a comment:

Thanks Kevin!

@JustinTong0323 added the ready-to-merge label (The PR is ready to merge after the CI is green.) on Aug 12, 2025
@zhyncs merged commit 3b3b3ba into sgl-project:main on Aug 13, 2025
179 of 192 checks passed
narutolhy pushed a commit to narutolhy/sglang that referenced this pull request on Aug 17, 2025:
Double vision prefill throughput by defaulting to optimal vision attention backend (sgl-project#8484)
Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>

MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request on Sep 8, 2025:
Double vision prefill throughput by defaulting to optimal vision attention backend (sgl-project#8484)
Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>