Fix potential memory fault issue and ncclSystemError in CI test #8681
zhyncs merged 12 commits into sgl-project:main
Conversation
Summary of Changes
Hello @kkHuang-amd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
I've addressed a potential memory fault issue that was causing random CI test failures related to GPU memory. The problem stemmed from the aiter MHA prefill backend reading 128 data points at once, where the padding data wasn't always reliable. My fix ensures that the padding part of the `kv_indices` tensor is assigned reliable data, preventing these memory faults.
Highlights
- Memory Pool Size Removal: I removed the `self.pool_size` attribute, as it was no longer needed for the updated `kv_indices` initialization.
- Efficient Tensor Initialization: I changed the initialization of `kv_indices` from `torch.full` to `torch.empty`, which is more appropriate given that values are assigned afterwards.
- Reliable Padding Data: I added a crucial step to explicitly assign reliable data to the padding section of the `kv_indices` tensor, copying the first element to the trailing part, which resolves the memory fault (see the sketch after this list).
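For illustration only, here is a minimal sketch of the pattern these highlights describe, reusing the `kv_indices`/`kv_indptr` names from this PR; the sizes, values, and the stand-in for the Triton kernel are hypothetical:

```python
import torch

padded_len = 256                                             # hypothetical padded buffer length
kv_indptr = torch.tensor([0, 100, 180], dtype=torch.int32)   # toy per-request prefix sums

# Before: torch.full pre-initialized every slot, an extra full-buffer write.
# kv_indices = torch.full((padded_len,), 0, dtype=torch.int32)

# After: allocate uninitialized and let the kernel fill the valid prefix.
kv_indices = torch.empty(padded_len, dtype=torch.int32)
token_num = int(kv_indptr[-1])
kv_indices[:token_num] = torch.arange(token_num, dtype=torch.int32)  # stand-in for the kernel

# Copy the first element into the padding tail so the trailing slots hold
# a valid index instead of whatever garbage torch.empty left behind.
kv_indices[token_num:] = kv_indices[0]
```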
Code Review
This pull request aims to fix a potential memory fault in the Aiter MHA prefill backend by ensuring that padding data for kv_indices is reliable. The changes involve removing an unused self.pool_size attribute, switching from torch.full to torch.empty for performance, and filling the padding part of kv_indices with a reliable value from the tensor itself. While the approach is sound, I've identified a critical edge case where kv_indices could be read before being initialized, which could lead to memory errors. I've provided a suggestion to address this.
```python
token_num = kv_indptr[-1]
kv_indices[token_num:] = kv_indices[0]
```
There is a potential issue here when token_num is 0. If paged_kernel_lens_sum is 0 (e.g., an empty batch or a batch with 0-length sequences), token_num will be 0. In this case, create_flashinfer_kv_indices_triton will not write to kv_indices, and kv_indices[0] will access uninitialized memory from torch.empty. This can lead to memory faults or unpredictable behavior.
You should handle the case where token_num is 0 to ensure kv_indices is always initialized with a safe value.
Suggested change:

```python
token_num = kv_indptr[-1]
if token_num > 0:
    kv_indices[token_num:] = kv_indices[0]
elif bs > 0:
    safe_index = self.req_to_token[req_pool_indices[0], 0].item()
    kv_indices.fill_(safe_index)
else:
    kv_indices.fill_(0)
```
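As written, the suggestion keeps the padding pointed at memory that is valid for the batch: `self.req_to_token[req_pool_indices[0], 0]` is the first token slot of the first request, and the final branch falls back to index 0 when the batch itself is empty.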
…project#8681) Co-authored-by: wunhuang <wunhuang@amd.com>
Co-worker: @hubertlu-tw
Motivation
We see random CI test-case failures caused by GPU memory faults.
In addition, #8590 introduced more NCCL communicators, which causes the current CI to fail with ncclSystemError due to an insufficient --shm-size.
Modifications
The aiter MHA prefill backend reads 128 indices at once, so the padding data must hold reliable values.
Assign reliable data to the padding part of kv_indices, as sketched below.
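As a rough illustration of why this matters (the block size of 128 comes from the description above; everything else is hypothetical): if the kernel loads indices in 128-wide blocks, the last block can extend past `token_num`, so the tail up to the next multiple of 128 must still hold a valid index:

```python
import torch

BLOCK = 128                        # the backend loads this many indices per read
token_num = 200                    # hypothetical count of valid kv indices
padded_len = ((token_num + BLOCK - 1) // BLOCK) * BLOCK   # round up to 256

kv_indices = torch.empty(padded_len, dtype=torch.int32)
kv_indices[:token_num] = torch.arange(token_num, dtype=torch.int32)  # stand-in for real indices

# Without this, elements [200:256) hold uninitialized garbage that the
# 128-wide read would dereference; copying a known-valid index keeps
# those reads inside mapped, meaningful memory.
kv_indices[token_num:] = kv_indices[0]
```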
Checklist