[Kernels] Enable Torch Symmetric Memory All-Reduce By Default #24111
Merged
simon-mo merged 4 commits into vllm-project:main on Sep 11, 2025
Conversation
Add benchmark
Signed-off-by: ilmarkov <markovilya197@gmail.com>
Contributor
@ilmarkov can torch symm memory also be used for AllToAll/AllGather/ReduceScatter? We currently see that those comm ops are quite slow when Attention DP is used, and I wonder if we can easily extend these AR implementations to those ops. Thanks! cc @weireweire
mgoin reviewed on Sep 10, 2025
Comment on lines +34 to +36:

```python
# add options for testing
force_multimem: Optional[bool] = None,
max_size_override: Optional[int] = None):
```
Member
nit: these don't actually look to be used?
Contributor
Author
We use these options in the allreduce benchmark.
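For illustration, here is a hypothetical way the benchmark might exercise these knobs. The argument names come from the diff above, but the receiving wrapper (`comm`) and the call site are assumptions, not vLLM's actual API:

```python
# Hypothetical benchmark-side calls; `comm` stands in for the allreduce
# wrapper whose signature gained these testing options in this PR.
out = comm.all_reduce(inp, force_multimem=True)        # pin the multimem (one-shot) path
out = comm.all_reduce(inp, max_size_override=1 << 30)  # lift the size cutoff for sweeps
```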
Member
#24694
skyloevil pushed a commit to skyloevil/vllm that referenced this pull request on Sep 13, 2025
dsxsteven pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request on Sep 15, 2025
ABC12345anouys pushed a commit to ABC12345anouys/vllm that referenced this pull request on Sep 25, 2025
Enable torch symm memory for TP allreduce by default (a usage sketch of the underlying API follows below).
Add a testing option to custom allreduce to choose between the one-shot and two-shot algorithms.
Add a benchmark to compare NCCL, custom allreduce, and torch symm mem allreduce.
Algorithm dispatch is based on input size; on CUDA Hopper and Blackwell devices this reaches the best performance among the existing allreduce algorithms.
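For reference, a minimal sketch of the PyTorch symmetric-memory all-reduce API this PR builds on, based on `torch.distributed._symmetric_memory` as of PyTorch 2.5+. Op names and availability can differ across PyTorch versions, so treat this as an illustration rather than vLLM's actual communicator code:

```python
# Minimal symmetric-memory all-reduce sketch (run with torchrun, one GPU per rank).
import torch
import torch.distributed as dist
import torch.distributed._symmetric_memory as symm_mem

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)
group_name = dist.group.WORLD.group_name

# Allocate from the symmetric heap and rendezvous so every rank maps its peers' buffers.
buf = symm_mem.empty(1024, dtype=torch.bfloat16, device="cuda")
symm_mem.rendezvous(buf, group_name)

buf.fill_(rank)
# One-shot multimem variant (needs NVLink multicast, i.e. Hopper+ with NVSwitch);
# torch.ops.symm_mem.two_shot_all_reduce_ is the in-place two-shot alternative.
torch.ops.symm_mem.multimem_all_reduce_(buf, "sum", group_name)

dist.destroy_process_group()
```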
E2E results are presented in the original PR #20759 (up to 10% TTFT improvement for TP=8).
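As a rough illustration of how an isolated primitives benchmark measures such an op, a simplified timing loop is sketched below; the actual benchmark script added in this PR also sweeps input sizes and algorithms, and this snippet assumes an already-initialized process group:

```python
import torch
import torch.distributed as dist

def bench_ms(fn, warmup: int = 10, iters: int = 50) -> float:
    """Average GPU time per call in milliseconds, measured with CUDA events."""
    for _ in range(warmup):
        fn()
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters

# e.g. an 8 MB bf16 input (4M elements * 2 bytes each)
x = torch.randn(4 * 1024 * 1024, dtype=torch.bfloat16, device="cuda")
nccl_ms = bench_ms(lambda: dist.all_reduce(x))
```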
Isolated primitives benchmark results:
H100
In the TP=4 case, for input sizes between 1 and 32 MB we could use symm_mem_multimem, but its performance is close to CA two-stage, so we use CA for all inputs below 32 MB. TP=8 results to be updated.
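Distilled into pseudo-dispatch logic, the measurement above implies something like the following; this is a hypothetical helper for H100 at TP=4 only, and vLLM's actual dispatch lives in its device communicators under different names:

```python
# Hypothetical size-based dispatch for H100 at TP=4, per the measurements above:
# custom allreduce (CA two-stage) below 32 MB, torch symm mem at or above it.
CA_CUTOFF_BYTES = 32 * 1024 * 1024

def pick_allreduce_backend(nbytes: int) -> str:
    if nbytes < CA_CUTOFF_BYTES:
        return "custom_allreduce"  # CA roughly matches symm_mem_multimem here, so prefer CA
    return "torch_symm_mem"
```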
B200