[FlashAttn] Add fused triton kernel for normal_decode_set_metadata #20778
Conversation
Code Review
The pull request introduces fused Triton kernels (_fused_metadata_kernel_general and _fused_metadata_kernel_ps1_no_swa) to the normal_decode_set_metadata function in flashattention_backend.py. This change replaces several sequential CUDA operations with optimized kernels, achieving a reported ~5.2x speedup. The normal_decode_set_metadata function now dispatches to either a general kernel or a specialized kernel for page_size=1 without SWA. A new test file, test_normal_decode_set_metadata.py, has been added to verify the correctness of these fused kernels across various page sizes, SWA configurations, batch sizes, sequence lengths, and edge cases. A review comment suggests refactoring the duplicated prefix sum logic found in both new Triton kernels into a shared helper function to improve code maintainability.
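For orientation, the sequence-length bookkeeping that these kernels fuse can be written in plain PyTorch roughly as follows. This is a reference sketch based on the kernel snippet below, not code from the PR, and it omits the page-table handling:

```python
import torch
import torch.nn.functional as F

def reference_seqlen_metadata(seq_lens: torch.Tensor, seq_len_delta: int):
    # Per-request KV cache lengths after applying the delta.
    cache_seqlens_int32 = (seq_lens + seq_len_delta).to(torch.int32)
    # Exclusive prefix sum: cu_seqlens_k[i] is the token count before request i;
    # the final entry is the total over the batch.
    cu_seqlens_k = F.pad(
        torch.cumsum(cache_seqlens_int32, dim=0, dtype=torch.int32), (1, 0)
    )
    return cache_seqlens_int32, cu_seqlens_k
```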
```python
# 1. Prefix sum (only one block does it)
if pid_b == 0 and pid_c == 0:
    acc = 0
    for idx in range(B):
        seq = tl.load(seq_lens + idx * seq_lens_stride_0)
        val = (seq + seq_len_delta).to(tl.int32)
        tl.store(cache_seqlens_int32 + idx * cache_seqlens_int32_stride_0, val)
        tl.store(cu_seqlens_k + idx * cu_seqlens_k_stride_0, acc)
        acc += val
    tl.store(cu_seqlens_k + B * cu_seqlens_k_stride_0, acc)
```
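As the review summary above suggests, this duplicated section could be pulled out into a shared `@triton.jit` helper that both fused kernels call. A possible shape, reusing the names from the snippet, might be the following (an illustrative sketch, not the PR's code):

```python
import triton
import triton.language as tl

@triton.jit
def _prefix_sum_seqlens(
    seq_lens, cache_seqlens_int32, cu_seqlens_k,
    seq_lens_stride_0, cache_seqlens_int32_stride_0, cu_seqlens_k_stride_0,
    seq_len_delta, B,
):
    # Sequential prefix sum over the batch, run by a single program,
    # mirroring the duplicated section in both fused kernels.
    acc = 0
    for idx in range(B):
        seq = tl.load(seq_lens + idx * seq_lens_stride_0)
        val = (seq + seq_len_delta).to(tl.int32)
        tl.store(cache_seqlens_int32 + idx * cache_seqlens_int32_stride_0, val)
        tl.store(cu_seqlens_k + idx * cu_seqlens_k_stride_0, acc)
        acc += val
    # The last slot holds the total number of cached tokens.
    tl.store(cu_seqlens_k + B * cu_seqlens_k_stride_0, acc)
```

Both kernels could then call `_prefix_sum_seqlens(...)` from inside the `pid_b == 0 and pid_c == 0` branch.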
```python
# Kernel configuration
BLOCK_COLS = 128
shift = (page_size).bit_length() - 1 if page_size > 1 else 0
```
Should we add a check for page_size? It must be a power-of-two number.
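For illustration, a guard along these lines at the point where `shift` is computed would enforce that constraint (a sketch, not part of the PR as written):

```python
# The shift-based page indexing is only correct when page_size is a power of two.
assert page_size > 0 and (page_size & (page_size - 1)) == 0, (
    f"page_size must be a power of two, got {page_size}"
)
```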
```python
self._run_test(batch_size=1, max_seq_len=128, page_size=1, has_swa=False)
self._run_test(batch_size=1, max_seq_len=256, page_size=64, has_swa=False)
```
```python
def test_max_seq_pages_zero(self):
```
The "zero" was a misnomer. We've renamed it to test_max_seq_pages_small to better reflect what it tests.
```
@@ -0,0 +1,414 @@
"""
```
Please add this test to the CI workflow.
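For local verification before wiring it into CI, the test file can presumably be run with pytest (path taken from this PR; the exact CI integration depends on the repository's workflow files):

```
python3 -m pytest python/sglang/test/attention/test_normal_decode_set_metadata.py -v
```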
Added isolated kernel benchmarking result.
/tag-and-rerun-ci |
…gl-project#20778) Co-authored-by: kinza99 <dh18324568312@163.com>
Motivation
This PR introduces a fused Triton kernel for the `normal_decode_set_metadata` function, addressing the existing `TODO: fuse these kernels` in `flashattention_backend.py`.

sglang/python/sglang/srt/layers/attention/flashattention_backend.py
Lines 2665 to 2672 in 966ae87
Modifications
Introduced two fused Triton kernels in `flashattention_backend.py` to replace the original sequential operations in `normal_decode_set_metadata`:

- `_fused_metadata_kernel_ps1_no_swa`: specialized fast path for the common case (page_size=1, no SWA)
- `_fused_metadata_kernel_general`: general path supporting arbitrary power-of-two page sizes and optional Sliding Window Attention (SWA)

Added unit tests in `python/sglang/test/attention/test_normal_decode_set_metadata.py`.

Accuracy Tests
Benchmarking and Profiling
Isolated kernel benchmark
We measured the latency of the isolated kernel over 1,000 iterations on NV-H200 using the following configuration:
`bs 32`, `page_size 1`, `max_ctx 8192`, `max_pool 1024`, `seq_delta 0`. The fused kernel achieves a 4.78x speedup compared to the baseline implementation.
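For context, an isolated-kernel measurement of this kind is typically done with a CUDA-event timing loop like the sketch below. This is a generic illustration, not the PR's benchmark script, and `run_metadata_update` stands for whichever path (fused or baseline) is being timed:

```python
import torch

def bench(run_metadata_update, iters: int = 1000, warmup: int = 50) -> float:
    """Return the average latency of run_metadata_update() in milliseconds."""
    for _ in range(warmup):
        run_metadata_update()
    torch.cuda.synchronize()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        run_metadata_update()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters
```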
End-to-end benchmark

The following benchmarks were conducted using `sglang.bench_serving` with `meta-llama/Meta-Llama-3.1-8B-Instruct` on NV-H200. Both `random-input-len` and `random-output-len` were set to `seqlen`.
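A typical invocation for this setup might look like the following; the flag names are assumptions based on `sglang.bench_serving`'s standard options (verify with `--help`), and the prompt count is illustrative:

```
python3 -m sglang.bench_serving --backend sglang \
    --model meta-llama/Meta-Llama-3.1-8B-Instruct \
    --dataset-name random \
    --random-input-len 1024 --random-output-len 1024 \
    --num-prompts 512
```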
Review Process
/tag-run-ci-label, /rerun-failed-ci, /tag-and-rerun-ci