[FSDP][Collectives] skipping reduce_scatter when world size is 1 #160136
Closed
anshul-si wants to merge 6 commits into gh/anshul-si/17/base from
Conversation
This was referenced Aug 11, 2025
weifengpy reviewed Aug 12, 2025
weifengpy reviewed Aug 12, 2025
weifengpy reviewed Aug 12, 2025
This was referenced Aug 12, 2025
…e is 1"

**Summary:** FSDP collectives currently issue CUDA synchronizations and communication ops regardless of the world size. Now that replicate will use FSDP, there will be cases where the group size is 1 and these synchronizations and ops run needlessly. This change updates fsdp_collectives to skip reduce_scatter in the foreach_reduce API when world_size = 1. I edited a test that uses CommDebugMode to verify that the reduce_scatter is gone, updated an affected test that used 1-way FSDP by adjusting its CommDebugMode assertions, and added a test command.

**Test Cases**
1. pytest test/distributed/_composable/fsdp/test_fully_shard_training.py -k test_train_parity_single_worldsize1
2. pytest test/distributed/_composable/test_composability/test_2d_composability.py -k test_tp_with_fsdp_offloading

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta

[ghstack-poisoned]
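The fast path described above can be sketched torch-free. This is a hypothetical stand-in, not PyTorch's actual `foreach_reduce` implementation: `foreach_reduce_sketch` and its averaging branch are illustrative names only. The key observation is that a reduce-scatter over a single rank is the identity, so both the collective and its stream synchronization can be skipped.

```python
# Hypothetical, torch-free sketch of the world_size == 1 fast path.
# Names here are illustrative, not PyTorch's API.

def foreach_reduce_sketch(unsharded_grads, world_size):
    """Reduce gradients across ranks; skip the collective when there is one rank."""
    if world_size == 1:
        # A reduce-scatter over a single rank is the identity: this rank already
        # owns the full (and only) copy of every gradient, so no communication
        # op or CUDA synchronization is needed.
        return list(unsharded_grads)
    # Multi-rank path: pure-Python stand-in for dist.reduce_scatter_tensor
    # with mean reduction across ranks.
    return [g / world_size for g in unsharded_grads]

print(foreach_reduce_sketch([2.0, 4.0], world_size=1))  # [2.0, 4.0]
```

At world_size = 1 the input list comes back unchanged, which is exactly what a CommDebugMode-style check would confirm: zero reduce_scatter calls recorded.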
This was referenced Aug 13, 2025
weifengpy reviewed Aug 18, 2025
weifengpy approved these changes Aug 18, 2025
Contributor (Author)
@pytorchbot merge
Collaborator
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
pytorchmergebot pushed a commit that referenced this pull request Aug 20, 2025
…vice movements (#160147)

**Summary:** To ensure that replicate behaves as intended (a specialized version of HSDP), it needs to pass the same training tests that fully_shard does. To that end, I added three test cases: one for input device movement and two for parameter registration during a model's forward and backward passes.

**Test Cases**
1. pytest test/distributed/_composable/test_replicate_training.py -k test_root_move_forward_input_to_device
2. pytest test/distributed/_composable/test_replicate_training.py -k TestReplicateRegisteredParams

Pull Request resolved: #160147
Approved by: https://github.com/weifengpy
ghstack dependencies: #160135, #160136
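What the input-device-movement test above checks can be sketched without torch. `FakeTensor` and `RootModule` below are hypothetical stand-ins (not PyTorch classes): the idea is that the root module moves forward inputs from the host to its compute device before running forward, and the test asserts the input landed on that device.

```python
# Torch-free sketch of a "root moves forward input to device" check.
# FakeTensor and RootModule are hypothetical stand-ins, not PyTorch classes.

class FakeTensor:
    """Minimal tensor stand-in that only tracks its device."""
    def __init__(self, device):
        self.device = device

    def to(self, device):
        # Return self if already on the target device, else a moved copy.
        return self if device == self.device else FakeTensor(device)

class RootModule:
    """Root wrapper that moves inputs to its compute device before forward."""
    def __init__(self, device):
        self.device = device

    def forward(self, x):
        x = x.to(self.device)  # replicate/fully_shard-style input movement
        return x

root = RootModule("cuda:0")
out = root.forward(FakeTensor("cpu"))
print(out.device)  # cuda:0
```

The real test exercises this through a model's forward pass; the sketch only isolates the device-movement contract being asserted.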
Collaborator
@pytorchbot revert -m "Sorry, but looks like this broke ROCm distributed CI" -c nosignal
@pragupta can provide some more triage details
Collaborator
@pytorchbot successfully started a revert job. Check the current status here.
pytorchmergebot added a commit that referenced this pull request Aug 21, 2025
…input device movements (#160147)" This reverts commit a3a82e3. Reverted #160147 on behalf of https://github.com/jithunnair-amd due to Sorry, but looks like this broke ROCm distributed CI ([comment](#160136 (comment)))
pytorchmergebot added a commit that referenced this pull request Aug 21, 2025
…s 1 (#160136)" This reverts commit 3d126e1. Reverted #160136 on behalf of https://github.com/jithunnair-amd due to Sorry, but looks like this broke ROCm distributed CI ([comment](#160136 (comment)))
Collaborator
@anshul-si your PR has been successfully reverted.
mori360 approved these changes Aug 21, 2025
anshul-si added a commit to anshul-si/pytorch that referenced this pull request Aug 21, 2025
ghstack-source-id: c19e06f Pull Request resolved: pytorch#160136
can-gaa-hou pushed a commit to can-gaa-hou/pytorch that referenced this pull request Aug 22, 2025
…orch#160136) Pull Request resolved: pytorch#160136 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135
anshul-si added a commit to anshul-si/pytorch that referenced this pull request Aug 26, 2025
…orch#160136) Pull Request resolved: pytorch#160136 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135
anshul-si added a commit to anshul-si/pytorch that referenced this pull request Aug 26, 2025
…vice movements (pytorch#160147) Pull Request resolved: pytorch#160147 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135, pytorch#160136
anshul-si added a commit to anshul-si/pytorch that referenced this pull request Sep 2, 2025
…orch#160136) Pull Request resolved: pytorch#160136 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135
anshul-si added a commit to anshul-si/pytorch that referenced this pull request Sep 2, 2025
…vice movements (pytorch#160147) Pull Request resolved: pytorch#160147 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135, pytorch#160136
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
…orch#160136) Pull Request resolved: pytorch#160136 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
…vice movements (pytorch#160147) Pull Request resolved: pytorch#160147 Approved by: https://github.com/weifengpy ghstack dependencies: pytorch#160135, pytorch#160136
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
…input device movements (pytorch#160147)" This reverts commit a3a82e3.
markc-614 pushed a commit to markc-614/pytorch that referenced this pull request Sep 17, 2025
…s 1 (pytorch#160136)" This reverts commit 3d126e1.
Stack from ghstack (oldest at bottom):
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @pragupta