
Add split tile size for Triton attention#10425

Merged
zhyncs merged 2 commits into main from ke/triton-split-tile-size
Sep 15, 2025

Conversation

@ispobock
Collaborator

Motivation

#10278

Use a fixed-size split-KV strategy instead of a fixed-number-of-splits strategy for better determinism in Triton attention.

Tested with script in #10417:

python3 -m sglang.launch_server --model-path Qwen/Qwen3-8B --attention-backend triton --disable-radix --disable-cuda-graph --triton-attention-split-tile-size 256

python3 -m sglang.test.test_deterministic
Total samples: 50, Unique samples: 1
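
The fixed-tile-size strategy described above can be sketched as follows. This is a minimal illustration, not the actual sglang implementation; the function name `num_kv_splits` is an assumption for the example:

```python
def num_kv_splits(seq_len: int, split_tile_size: int) -> int:
    # Ceiling division: each split covers at most `split_tile_size` KV tokens,
    # so the number of splits is derived from the sequence length rather than
    # being a fixed count. The split boundaries for a given sequence length are
    # then always the same, which is what makes the reduction deterministic.
    return (seq_len + split_tile_size - 1) // split_tile_size

# With --triton-attention-split-tile-size 256:
print(num_kv_splits(1000, 256))  # -> 4
print(num_kv_splits(4096, 256))  # -> 16
```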

Contributor

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @ispobock, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the KV split strategy within the Triton attention mechanism, moving from a static count of splits to a dynamic calculation based on a configurable tile size. The primary goal of this change is to improve the determinism of Triton attention operations, particularly during decoding, by providing more granular control over how the KV cache is partitioned. A new command-line argument has been added to let users specify this tile size.

Highlights

  • Triton Attention KV Split Strategy: The pull request changes the Triton attention KV split strategy from using a fixed number of splits to a fixed tile size.
  • Improved Determinism: This new strategy aims to achieve better determinism in Triton attention, as demonstrated by testing results showing 50 total samples yielding only 1 unique sample.
  • New Configuration Option: A new server argument, triton_attention_split_tile_size, has been introduced to allow users to specify and control the size of these split KV tiles.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a new mechanism for controlling the KV cache splitting strategy in Triton attention, aiming for better determinism. It adds a triton_attention_split_tile_size server argument. When specified, this tile size is used to calculate the number of KV splits from the sequence length, overriding the previous fixed-number-of-splits and heuristic-based strategies. The changes span triton_backend.py and server_args.py. The logic is sound, but I've pointed out a potential ZeroDivisionError if a user provides a non-positive value for the tile size, and suggested adding a validation check.
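
The suggested guard could look like the following minimal sketch. The argparse wiring here is illustrative only, not the actual server_args.py code; the `parse_args` helper is an assumption for the example:

```python
import argparse

def parse_args(argv):
    parser = argparse.ArgumentParser()
    # Hypothetical stand-in for the real sglang server argument.
    parser.add_argument("--triton-attention-split-tile-size", type=int, default=None)
    args = parser.parse_args(argv)
    # Guard suggested by the review: a non-positive tile size would later
    # produce a ZeroDivisionError (or a nonsense split count) in the
    # ceiling division that derives the number of KV splits.
    tile = args.triton_attention_split_tile_size
    if tile is not None and tile <= 0:
        raise ValueError(
            f"--triton-attention-split-tile-size must be a positive integer, got {tile}"
        )
    return args

args = parse_args(["--triton-attention-split-tile-size", "256"])
print(args.triton_attention_split_tile_size)  # -> 256
```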

Comment thread on python/sglang/srt/layers/attention/triton_backend.py
@zhyncs zhyncs merged commit 60d7bed into main Sep 15, 2025
77 of 83 checks passed
@zhyncs zhyncs deleted the ke/triton-split-tile-size branch September 15, 2025 00:35


3 participants