[Fix] Fix allreduce bug in Piecewise Graph #12106
Merged
ispobock merged 3 commits into sgl-project:main on Oct 26, 2025
Conversation
ispobock approved these changes on Oct 25, 2025
Collaborator:
When I run […]
Author:
@ispobock The newest commit should fix this problem. I am not sure about the graph generated by […]

Motivation
Previously, when piecewise CUDA graph was enabled with tp > 1, allreduce was broken. This is the command I used to test: […]

Related PRs: #11845, #10062
Modifications
There are two bugs here.

The first one: when we disable custom allreduce, we get an error while capturing the piecewise CUDA graph. I think this is because `torch.compile` cannot include the NCCL collective in the graph. Therefore, I use `sglang.inplace_all_reduce` to split the graph (`sglang.inplace_all_reduce` stands for NCCL, while `sglang.outplace_all_reduce` stands for others like custom allreduce). We cannot use `sglang.outplace_all_reduce` as the split point here, since `sglang.outplace_all_reduce` creates a new tensor every time, while CUDA graph needs its input tensors to be fixed.
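As a rough sketch of this idea (illustrative only: the `demo::` namespace, the fake registration, and the `SPLITTING_OPS` list are my assumptions, not code from this PR), an in-place NCCL allreduce can be wrapped as an opaque custom op that a piecewise compiler uses as a split point, so the collective runs eagerly between captured subgraphs while each subgraph keeps seeing the same tensor storage:

```python
import torch
import torch.distributed as dist

# Hypothetical stand-in for sglang.inplace_all_reduce: an opaque custom op
# that mutates its argument, so torch.compile never traces the NCCL call,
# and CUDA graph replay always sees the same buffer address.
# Assumes torch >= 2.4 and an initialized process group.
@torch.library.custom_op("demo::inplace_all_reduce", mutates_args=("x",))
def inplace_all_reduce(x: torch.Tensor) -> None:
    dist.all_reduce(x)  # runs eagerly; never captured into the compiled graph

@inplace_all_reduce.register_fake
def _(x: torch.Tensor) -> None:
    # Metadata-only stub for compilation; the real op writes into x in place.
    return None

# A piecewise backend would cut the FX graph at these ops and capture each
# piece as its own CUDA graph (the list name is illustrative).
SPLITTING_OPS = ["demo::inplace_all_reduce"]
```

An out-of-place variant would return a freshly allocated tensor on every call, so the captured piece downstream of the split would replay against a stale address; writing the result back into `x` keeps the pointer stable across replays.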
The second one: we get an illegal memory access for large message sizes. I think this is because, for custom allreduce, the message size cannot be too large.
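One common way to handle this, shown here as a minimal sketch rather than this PR's actual change, is to dispatch on payload size and fall back to NCCL above a cap; the `custom_all_reduce` stub and the 8 MiB threshold are both hypothetical:

```python
import torch
import torch.distributed as dist

# Custom allreduce kernels typically stage data through a fixed-size IPC
# buffer, so payloads beyond that buffer can touch memory out of bounds.
MAX_CUSTOM_AR_BYTES = 8 * 1024 * 1024  # illustrative cap, not SGLang's value

def custom_all_reduce(x: torch.Tensor) -> None:
    # Stand-in for a one-shot/two-shot custom allreduce kernel.
    dist.all_reduce(x)

def all_reduce(x: torch.Tensor) -> None:
    if x.numel() * x.element_size() <= MAX_CUSTOM_AR_BYTES:
        custom_all_reduce(x)  # small message: fast custom kernel
    else:
        dist.all_reduce(x)    # large message: NCCL handles any size
```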
Accuracy Tests
Benchmarking and Profiling
Checklist