Support dist.all_to_all_single #8064

Merged
zpcore merged 3 commits into master from piz/all-to-all on Sep 25, 2024

Conversation

@zpcore
Member

@zpcore zpcore commented Sep 24, 2024

Add support for torch.distributed.all_to_all_single in both the dynamo and non-dynamo cases.

Note that there is a function signature mismatch between torch's all_to_all_single and the XLA AllToAll op. To leverage the AllToAll op, we don't support specifying input_split_sizes and output_split_sizes at this time. See test_collective_ops_tpu.py for usage.
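With the equal-split restriction, the collective reduces to a "chunk transpose": rank i's j-th chunk lands as rank j's i-th chunk. A minimal sketch of those semantics in plain PyTorch, simulating all ranks in one process (the helper name and the list-of-tensors representation are illustrative, not part of this PR's API):

```python
import torch

def simulate_all_to_all_single(inputs):
    # inputs: one tensor per rank; each length must divide evenly by
    # world_size, since split sizes cannot be customized here.
    world = len(inputs)
    # chunks[src][dst] is the piece rank `src` sends to rank `dst`.
    chunks = [t.chunk(world) for t in inputs]
    # Rank `dst` receives rank `src`'s chunk at position `src`.
    return [
        torch.cat([chunks[src][dst] for src in range(world)])
        for dst in range(world)
    ]
```

For example, with two simulated ranks holding [0, 1, 2, 3] and [4, 5, 6, 7], rank 0 ends up with [0, 1, 4, 5] and rank 1 with [2, 3, 6, 7], matching what dist.all_to_all_single produces with default (equal) splits.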

@zpcore zpcore marked this pull request as ready for review September 24, 2024 21:08
@zpcore zpcore requested a review from will-cromar September 24, 2024 21:08
@zpcore zpcore requested a review from JackCaoG September 24, 2024 21:30
@will-cromar
Collaborator

Thanks!

@zpcore zpcore merged commit b378a28 into master Sep 25, 2024
@zpcore zpcore deleted the piz/all-to-all branch September 25, 2024 21:54
zpcore added a commit that referenced this pull request Sep 26, 2024
@miladm miladm added the usability Bugs/features related to improving the usability of PyTorch/XLA label Nov 22, 2024
