[DP] fix: engine crash when decode batch is padded #8995
Conversation
Summary of Changes
Hello @ch-wan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've implemented a fix to prevent an engine crash that occurs when decode batches in DP attention are padded to the chunk prefill size. This crash happens because the padded decode batch's tensor shape becomes larger than the kv_indices buffer during the decode phase. My solution involves treating these specific decode batches as prefill batches during processing.
Highlights
- Addressing Engine Crash: The primary goal is to resolve an engine crash in DP attention caused by decode batches being padded, leading to tensor shape mismatches with the `kv_indices` buffer.
- Decode Batch Handling: When a decode batch is processed alongside other prefill batches and is padded to the maximum length, it is now temporarily reclassified and processed as an `EXTEND` (prefill) batch. This involves adjusting its `forward_mode` and related batch parameters.
- Robust Memory Comparison: In `communicator.py`, I've updated the tensor memory comparison from `data_ptr()` to `untyped_storage()` for improved accuracy when checking if `hidden_states` shares the same underlying storage as `global_hidden_states` (illustrated in the sketch after this list).
- Batch State Restoration: After processing, the original `forward_mode` and `batch_size` of the batch are restored to their initial decode state, ensuring proper subsequent handling.
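A minimal PyTorch sketch (not the actual `communicator.py` code) of why the storage-based check is more robust: a sliced view aliases the same buffer as its base tensor but reports a different `data_ptr()`, so a pointer-equality check misses aliasing that comparing `untyped_storage()` pointers catches. The tensor shapes below are hypothetical.

```python
import torch

# Hypothetical stand-ins for global_hidden_states and a view into it.
global_hidden_states = torch.zeros(8, 4)
hidden_states = global_hidden_states[2:]  # view with a nonzero storage offset

# data_ptr() points at the view's first element, offset into the buffer,
# so a naive pointer comparison concludes the tensors are unrelated.
assert hidden_states.data_ptr() != global_hidden_states.data_ptr()

# The untyped storage's pointer refers to the start of the shared buffer,
# so this comparison correctly detects that both tensors alias it.
assert (
    hidden_states.untyped_storage().data_ptr()
    == global_hidden_states.untyped_storage().data_ptr()
)
```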
Code Review
This pull request addresses a crash that occurs when a decode batch is padded, a situation that can arise in DP attention when the batch is processed alongside prefill batches. The proposed solution is to treat these padded decode batches as prefill batches, which seems like a sound approach. The implementation looks mostly correct, but I've identified a minor code clarity issue that should be addressed.
```python
self.extend_input_logprob_token_ids_gpu = (
    self.extend_input_logprob_token_ids_gpu
)
```
Motivation
In DP attention, a decode batch can be padded to the chunked-prefill size when it is processed together with other prefill batches. This causes an engine crash because the padded tensor shape is larger than the `kv_indices` buffer used by attention during the decode phase.
Modifications
We modify the decode batch and treat it as a prefill batch when it is processed together with other prefill batches.
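As a rough illustration of this reclassify-and-restore flow, here is a minimal sketch assuming a simplified batch object with the `forward_mode` and `batch_size` fields mentioned above; the `ForwardMode` enum, `Batch` dataclass, and `as_extend` helper are hypothetical simplifications, not the actual SGLang scheduler code.

```python
from contextlib import contextmanager
from dataclasses import dataclass
from enum import Enum, auto


class ForwardMode(Enum):
    DECODE = auto()
    EXTEND = auto()  # EXTEND == prefill


@dataclass
class Batch:
    forward_mode: ForwardMode
    batch_size: int


@contextmanager
def as_extend(batch: Batch, padded_size: int):
    """Temporarily run a padded decode batch as an EXTEND (prefill) batch so
    attention sizes its metadata for the padded shape instead of overrunning
    the decode-sized kv_indices buffer."""
    orig_mode, orig_bs = batch.forward_mode, batch.batch_size
    batch.forward_mode = ForwardMode.EXTEND
    batch.batch_size = padded_size
    try:
        yield batch
    finally:
        # Restore the decode state so subsequent scheduling behaves normally.
        batch.forward_mode, batch.batch_size = orig_mode, orig_bs


# Usage: a 3-request decode batch padded up to a chunked-prefill size of 8.
batch = Batch(ForwardMode.DECODE, batch_size=3)
with as_extend(batch, padded_size=8):
    assert batch.forward_mode is ForwardMode.EXTEND  # runs as prefill here
assert batch.forward_mode is ForwardMode.DECODE  # decode state restored
assert batch.batch_size == 3
```

The `try`/`finally` ensures the original decode state is restored even if the forward pass raises, matching the batch-state-restoration step described in the summary above.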
Accuracy Test
Benchmark & Profiling
Checklist