[Gradient Compression] Explicitly restrict the scope of torch.cuda.synchronize to the current device#49711
Closed
wayi1 wants to merge 3 commits into gh/SciPioneer/41/base from
Conversation
[Gradient Compression] Explicitly restrict the scope of torch.cuda.synchronize to the current device

`torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability.

Differential Revision: [D25672267](https://our.internmc.facebook.com/intern/diff/D25672267/)

[ghstack-poisoned]
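A minimal sketch of the pattern this PR describes (not the exact diff): `torch.cuda.synchronize()` already targets the current device when called with no argument, and passing that device explicitly only makes the scope visible at the call site.

```python
import torch

# Assumes a CUDA-capable machine; illustrative only, not the PR's code.
device = torch.device("cuda", torch.cuda.current_device())

# Implicit: waits for all kernels on the *current* device.
torch.cuda.synchronize()

# Explicit: same behavior, but the target device is named at the call site.
torch.cuda.synchronize(device)
```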
Contributor
💊 CI failures summary and remediations

As of commit b288aee (more details on the Dr. CI page):

🕵️ 3 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages:
…rch.cuda.synchronize to the current device" `torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability. Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202 Differential Revision: [D25672267](https://our.internmc.facebook.com/intern/diff/D25672267/) [ghstack-poisoned]
…rch.cuda.synchronize to the current device" `torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability. Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202 Differential Revision: [D25672267](https://our.internmc.facebook.com/intern/diff/D25672267/) [ghstack-poisoned]
rohan-varma approved these changes on Dec 23, 2020
Contributor
This pull request has been merged in 88c33ff.
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026
…nchronize to the current device (pytorch#49711)

Summary:
Pull Request resolved: pytorch#49711

`torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression pytorch#47202

ghstack-source-id: 119017654

Test Plan:
buck test mode/dev-nosan caffe2/test/distributed:c10d -- test_powerSGD_ddp_comm_hook_nccl
buck test mode/dev-nosan caffe2/test/distributed:distributed_nccl_fork -- test_DistributedDataParallel_powerSGD_ddp_comm_hook

Reviewed By: rohan-varma

Differential Revision: D25672267

fbshipit-source-id: 62a2266727a2ea76175f3c438daf20951091c771
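For context, a hypothetical sketch of where such a call might sit in a DDP communication hook; the helper and tensor names below are illustrative and are not the actual PowerSGD hook code.

```python
import torch

def wait_for_compression_kernels(bucket_tensor: torch.Tensor) -> None:
    # Hypothetical helper, not the PR's code: before host-side work that
    # depends on CUDA kernels which produced `bucket_tensor`, block on that
    # tensor's device explicitly instead of relying on the implicit
    # current-device default of torch.cuda.synchronize().
    torch.cuda.synchronize(bucket_tensor.device)
```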
Stack from ghstack:
`torch.cuda.synchronize` uses the current device by default. Explicitly specify this device for better readability.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
Differential Revision: D25672267