
fix: add trtllm-allreduce-fusion api notes and fix memory error#1229

Merged
yzh119 merged 7 commits into flashinfer-ai:main from yyihuang:fix_ar_fusion
Jul 8, 2025
Conversation

@yyihuang
Collaborator

@yyihuang yyihuang commented Jul 7, 2025

📌 Description

We add usage notes to each API to help integration into vLLM/SGLang, and fix the memory error reported in the linked issues.

To avoid workspace size overflow, the maximum lamport communication size MAX_COMM_SIZE (computed as hidden * max_token) must be less than round_down(INT32_MAX, 2MB) = 2145386496. Any larger requested size is clamped to this value with a warning: "warning: lamport_comm_size 2147483648 is greater than MAX_COMM_SIZE 2145386496, set to MAX_COMM_SIZE".

If the actual lamport communication size (computed as hidden * token_num) exceeds MAX_COMM_SIZE, always set use_oneshot to False.
Otherwise, you may set use_oneshot according to your preference; for the min-latency case, set it to (token_num <= 128).
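The clamping and use_oneshot selection rules above can be sketched as follows. This is an illustrative sketch, not the flashinfer implementation: the helper names (clamp_lamport_comm_size, choose_use_oneshot) and the prefer_oneshot parameter are hypothetical; only the constant 2145386496 and the decision rules come from the description.

```python
import warnings

INT32_MAX = 2**31 - 1          # 2147483647
ALIGNMENT = 2 * 1024 * 1024    # 2 MB
# round_down(INT32_MAX, 2MB) = 2145386496
MAX_COMM_SIZE = (INT32_MAX // ALIGNMENT) * ALIGNMENT


def clamp_lamport_comm_size(hidden: int, max_token: int) -> int:
    """Clamp the requested lamport communication size, warning on overflow."""
    size = hidden * max_token
    if size > MAX_COMM_SIZE:
        warnings.warn(
            f"lamport_comm_size {size} is greater than MAX_COMM_SIZE "
            f"{MAX_COMM_SIZE}, set to MAX_COMM_SIZE"
        )
        size = MAX_COMM_SIZE
    return size


def choose_use_oneshot(
    hidden: int,
    token_num: int,
    min_latency: bool = False,
    prefer_oneshot: bool = True,
) -> bool:
    """Pick use_oneshot per the rules above: never one-shot past
    MAX_COMM_SIZE; in min-latency mode, one-shot only for small batches;
    otherwise caller preference."""
    if hidden * token_num > MAX_COMM_SIZE:
        return False
    if min_latency:
        return token_num <= 128
    return prefer_oneshot
```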

🔍 Related Issues

#1223

sgl-project/sglang#7621

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or via your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Summary of Changes

Hello @yyihuang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the robustness and clarity of the trtllm_allreduce_fusion API. It enhances the API's documentation to provide clearer guidance on its usage and implements a critical runtime check to prevent a specific memory error, thereby making the API more reliable for downstream integrations.

Highlights

  • API Documentation Enhancement: Added detailed usage notes to the trtllm_allreduce_fusion API, specifically guiding when the use_oneshot parameter should be enabled or disabled based on strategy preference, latency mode, and sequence length, to aid integration with systems like vLLM/SGL.
  • Memory Error Prevention: Introduced a runtime assertion within the trtllm_allreduce_fusion function to ensure that the input sequence length is greater than the world_size when use_oneshot is disabled. This change directly addresses a reported memory error, likely related to issue #1223.
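The runtime check described in the second highlight can be sketched as below. This is a hypothetical illustration of the guard, not the actual flashinfer code: the function name and signature are invented, and only the rule (two-shot mode requires sequence length greater than world_size) comes from the summary above.

```python
def check_twoshot_args(token_num: int, world_size: int, use_oneshot: bool) -> None:
    """Guard sketch: the two-shot (use_oneshot=False) path partitions the
    input sequence across ranks, so the token count must exceed world_size."""
    if not use_oneshot:
        assert token_num > world_size, (
            f"Two-shot allreduce requires token_num ({token_num}) > "
            f"world_size ({world_size}); use use_oneshot=True for small batches."
        )
```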

Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request adds API notes for trtllm_allreduce_fusion and includes a check to address a potential memory error. The review identifies a bug in the validation logic and suggests improvements to the docstring formatting.

Comment thread flashinfer/comm/trtllm_ar.py Outdated
Comment thread flashinfer/comm/trtllm_ar.py
@yyihuang yyihuang requested a review from yzh119 July 8, 2025 00:37
Comment thread flashinfer/comm/trtllm_ar.py Outdated
@yyihuang yyihuang requested a review from yzh119 July 8, 2025 03:27
Comment thread flashinfer/comm/trtllm_ar.py Outdated
@yyihuang yyihuang requested a review from yzh119 July 8, 2025 03:56
Collaborator

@yzh119 yzh119 left a comment


LGTM, thank you for the quick fix!

@yzh119 yzh119 merged commit 4b0e0fb into flashinfer-ai:main Jul 8, 2025
2 checks passed