
[DP] fix: engine crash when decode batch is padded#8995

Merged
ch-wan merged 4 commits into main from cheng/fix/dp
Aug 9, 2025

Conversation

@ch-wan (Collaborator) commented Aug 9, 2025

Motivation

In DP attention, a decode batch can be padded to the chunked prefill size when it is processed together with prefill batches. This crashes the engine because the padded tensor shape exceeds the kv_indices buffer that attention allocates for the decode phase.

Modifications

We modify the decode batch and treat it as a prefill batch when it is processed with other prefill batches; a sketch of the idea follows.
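
A minimal sketch of the reclassification, assuming a simplified Batch type and a run_batch callable (both hypothetical stand-ins; the real ScheduleBatch fields and helpers in SGLang differ):

```python
from dataclasses import dataclass
from enum import Enum, auto


class ForwardMode(Enum):
    EXTEND = auto()  # prefill
    DECODE = auto()


@dataclass
class Batch:
    forward_mode: ForwardMode
    batch_size: int
    padded_size: int  # size after DP padding to the chunked prefill length


def run_padded_decode_as_extend(batch, run_batch):
    """Temporarily run a padded decode batch in extend (prefill) mode.

    Running it in decode mode would index kv_indices with the padded
    shape and overflow the decode-phase buffer; extend mode sizes its
    buffers from the padded length instead.
    """
    orig_mode, orig_size = batch.forward_mode, batch.batch_size
    batch.forward_mode = ForwardMode.EXTEND
    batch.batch_size = batch.padded_size
    try:
        run_batch(batch)
    finally:
        # Restore the original decode state so subsequent scheduling
        # sees the batch as it was (matching the PR's restoration step).
        batch.forward_mode, batch.batch_size = orig_mode, orig_size
```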

Accuracy Test

Benchmark & Profiling

Checklist

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @ch-wan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

I've implemented a fix to prevent an engine crash that occurs when decode batches in DP attention are padded to the chunk prefill size. This crash happens because the padded decode batch's tensor shape becomes larger than the kv_indices buffer during the decode phase. My solution involves treating these specific decode batches as prefill batches during processing.

Highlights

  • Addressing Engine Crash: The primary goal is to resolve an engine crash in DP attention caused by decode batches being padded, leading to tensor shape mismatches with the kv_indices buffer.
  • Decode Batch Handling: When a decode batch is processed alongside other prefill batches and is padded to the maximum length, it is now temporarily reclassified and processed as an EXTEND (prefill) batch. This involves adjusting its forward_mode and related batch parameters.
  • Robust Memory Comparison: In communicator.py, I've updated the tensor memory comparison from data_ptr() to untyped_storage() for improved accuracy when checking whether hidden_states shares the same underlying storage as global_hidden_states (see the sketch after this list).
  • Batch State Restoration: After processing, the original forward_mode and batch_size of the batch are restored to their initial decode state, ensuring proper subsequent handling.
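
The untyped_storage() point can be illustrated with plain PyTorch (this is a self-contained demonstration, not the communicator.py code, and the exact comparison used there may differ): data_ptr() returns the address of a tensor's first element, so a view starting at a nonzero offset reports a different pointer even though it aliases the same allocation, while the underlying untyped storage's base pointer matches.

```python
import torch

buf = torch.empty(8)
view = buf[2:6]  # a slice that aliases buf's memory at a nonzero offset

# data_ptr() points at the first element, so it differs between the
# buffer and the offset view even though they share memory.
print(buf.data_ptr() == view.data_ptr())  # False

# The untyped storage is the shared underlying allocation, so its base
# address matches for any view of the same buffer.
print(buf.untyped_storage().data_ptr() == view.untyped_storage().data_ptr())  # True
```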

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses a crash that occurs when a decode batch is padded, a situation that can arise in DP attention when processed alongside prefill batches. The proposed solution is to treat these padded decode batches as prefill batches, which seems like a sound approach. The implementation looks mostly correct, but I've identified a minor code clarity issue that should be addressed.

Comment on lines +671 to +673
self.extend_input_logprob_token_ids_gpu = (
self.extend_input_logprob_token_ids_gpu
)

Severity: medium

This assignment to self.extend_input_logprob_token_ids_gpu is a no-op as it assigns the attribute to itself. This is likely a remnant from debugging or a copy-paste error and should be removed for code clarity.

@ch-wan changed the title from "[DP] [wip] fix: engine crash when decode batch is padded" to "[DP] fix: engine crash when decode batch is padded" on Aug 9, 2025