Add split tile size for Triton attention #10425
Conversation
Summary of Changes
Hello @ispobock, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request refactors the KV split strategy in the Triton attention backend, moving from a static number of splits to a dynamic count computed from a configurable tile size. The primary goal is to improve the determinism of Triton attention, particularly during decoding, by giving more granular control over how the KV cache is partitioned. A new command-line argument lets users specify this tile size.
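Conceptually, a fixed tile size means the number of KV splits is derived by ceiling division over the sequence length. A minimal illustration of that relationship (the function name and exact placement are assumptions, not the PR's code):

```python
def num_kv_splits_for(seq_len: int, split_tile_size: int) -> int:
    """Number of KV splits when each split covers a fixed tile of KV positions.

    Because the tile size is fixed, each split reduces over the same number of
    KV entries regardless of how the batch is composed, so the reduction order
    no longer depends on a heuristic split count -- which is what helps
    determinism.
    """
    # Ceiling division: e.g. seq_len=1000, split_tile_size=256 -> 4 splits.
    return (seq_len + split_tile_size - 1) // split_tile_size


assert num_kv_splits_for(1000, 256) == 4
assert num_kv_splits_for(256, 256) == 1
```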
Highlights
- Triton Attention KV Split Strategy: The pull request changes the Triton attention KV split strategy from using a fixed number of splits to a fixed tile size.
- Improved Determinism: This new strategy aims to achieve better determinism in Triton attention, as demonstrated by testing results showing 50 total samples yielding only 1 unique sample.
- New Configuration Option: A new server argument, triton_attention_split_tile_size, has been introduced to let users specify the size of these split-KV tiles (see the sketch below).
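For reference, a minimal sketch of how such an option could be exposed in server_args.py, assuming a plain argparse-style flag; the default, help text, and underscore-to-hyphen flag spelling are assumptions, not the PR's exact code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--triton-attention-split-tile-size",
    type=int,
    default=None,  # None keeps the previous fixed/heuristic split behavior
    help="Fixed KV tile size used to derive the number of KV splits in the "
         "Triton attention backend; improves determinism during decoding.",
)

# Example invocation of the parser only (not a full server launch).
args = parser.parse_args(["--triton-attention-split-tile-size", "256"])
print(args.triton_attention_split_tile_size)  # -> 256
```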
Code Review
This pull request introduces a new mechanism for controlling the KV cache splitting strategy in Triton attention, aiming for better determinism. It adds a triton_attention_split_tile_size server argument. When specified, this tile size is used to calculate the number of KV splits based on sequence length, overriding the previous fixed-number-of-splits or heuristic-based strategies. The changes are implemented across triton_backend.py and server_args.py. The logic is sound, but I've pointed out a potential ZeroDivisionError if a user provides a non-positive value for the tile size and suggested adding a validation check.
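As a concrete illustration of that suggestion, a validation along these lines would reject non-positive values before they reach the split computation (the function and field names are illustrative, not the PR's actual code):

```python
def validate_split_tile_size(triton_attention_split_tile_size):
    """Reject values that would later cause a ZeroDivisionError or degenerate splits."""
    if triton_attention_split_tile_size is None:
        return  # unset: fall back to the previous fixed/heuristic split strategy
    if triton_attention_split_tile_size <= 0:
        raise ValueError(
            "--triton-attention-split-tile-size must be a positive integer, "
            f"got {triton_attention_split_tile_size}"
        )
```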
Motivation
#10278
Use a fixed-size split-KV strategy instead of a fixed-number-of-splits strategy for better determinism in Triton attention.
Tested with the script in #10417: