Double vision prefill throughput by defaulting to optimal vision attention backend #8484
zhyncs merged 15 commits into sgl-project:main from AlienKevin:optimized-vision-attn
Conversation
Summary of Changes
Hello @AlienKevin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've implemented a change to significantly improve the vision prefill throughput by ensuring that the most optimal attention backend is used by default. Previously, vision attention would often default to the less efficient SDPA even when a more performant Triton backend was available. My changes now detect CUDA availability and prioritize triton_attn, which has been shown to double vision prefill throughput on VLMs like Qwen 2.5 VL 7B on H200. This optimization leads to a substantial performance gain, as evidenced by benchmarks showing a 1.6x increase in input tokens per second.
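As a rough sketch of the described behavior (not the actual diff), the backend selection amounts to a CUDA availability check; the helper name `get_vision_attn_backend` is an assumption for illustration:

```python
import torch


def get_vision_attn_backend() -> str:
    """Pick the vision attention backend, per the behavior this PR describes.

    Hypothetical helper: prefer the Triton backend on CUDA devices,
    otherwise fall back to PyTorch's scaled_dot_product_attention (SDPA).
    """
    if torch.cuda.is_available():
        return "triton_attn"
    return "sdpa"
```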
Highlights
- Optimized Vision Attention Backend Selection: The system now intelligently defaults to `triton_attn` on CUDA-enabled platforms for vision attention, falling back to `sdpa` otherwise, significantly boosting performance.
- Improved Qwen 2.5 VL Model Integration: The Qwen 2.5 VL model's attention implementation initialization has been updated to leverage this new dynamic backend selection, removing its previous hardcoded `sdpa` preference.
- Significant Throughput Gains: Benchmarks demonstrate a 1.6x increase in input tokens/s, leading to faster Time To First Token (TTFT) and Time Per Output Token (TPOT) for vision models.
Code Review
This pull request improves vision prefill throughput by defaulting to the Triton attention backend on CUDA-enabled platforms, which is more efficient than the previous SDPA default. The changes are logical and well-supported by the provided benchmarks. The implementation correctly modifies VisionAttention to select the optimal backend and updates Qwen2_5_VisionBlock to allow this new default behavior. I have one minor style suggestion to improve code quality.
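For concreteness, the shape of the change described above is roughly the following; the simplified constructor signature is an assumption for this sketch, not the actual sglang code:

```python
from typing import Optional

import torch


class Qwen2_5_VisionBlock:
    """Simplified stand-in for the model's vision block (sketch only)."""

    def __init__(self, attn_implementation: Optional[str] = None) -> None:
        # Before this PR the argument was effectively hardcoded to "sdpa",
        # pinning the slower backend even on CUDA. A None default lets the
        # platform-aware selection pick the optimal backend instead.
        if attn_implementation is None:
            attn_implementation = (
                "triton_attn" if torch.cuda.is_available() else "sdpa"
            )
        self.attn_backend = attn_implementation
```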
Thanks for the contribution! May I ask why triton_attn was chosen instead of fa3?
I found them to have similar performance, but triton_attn is supported on more devices, like GB200.
@JustinTong0323 Just checking in: is anything else blocking this PR from being merged? Thanks.
Ping me in Slack if CI has passed or needs a rerun, thanks.
…on to reduce false negatives
…ntion backend (sgl-project#8484) Co-authored-by: Xiang (Kevin) Li <lik@nvidia.com>
Motivation
TL;DR: Vision attention defaults to the inefficient SDPA backend even on platforms that support the more efficient Triton backend. By defaulting to the most optimal supported attention backend, we can double vision prefill throughput on VLMs like Qwen 2.5 VL 7B on H200.
See #8179 (comment)
Benchmark & Profiling
Profiling Qwen 2.5 VL 7B before and after this PR on GB200 shows a 1.6x increase in throughput under ISL=1000, OSL=2, and concurrency=256.
Before PR:
After PR:
Checklist