Avoid nested CommTensor wrapping#84963
Conversation
CI status as of commit 61e89db: ✅ No Failures, 11 Pending (artifacts and rendered test results at hud.pytorch.org/pr/84963).
wanchaol left a comment:
have one comment about testing
```python
work = dist.all_reduce(CommTensor(tensor), group=group, async_op=True)
return work, tensor

# ...

self._test_work_wait(tensor, comm_fn=comm_fn)
```
shouldn't we create a CommTensor here before passing to comm_fn to verify the changes?
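The review comment and the PR title both hinge on wrapping being idempotent: passing an already-wrapped tensor into a collective should not produce `CommTensor(CommTensor(t))`. A minimal sketch of that pattern, using a hypothetical stand-in class rather than the real `torch.distributed` `CommTensor` API:

```python
# Toy illustration of idempotent wrapping (the "avoid nested wrapping" idea).
# CommTensorSketch is a hypothetical stand-in, NOT the real CommTensor class.
class CommTensorSketch:
    def __new__(cls, data):
        # If the input is already wrapped, return it unchanged instead of
        # wrapping again -- this prevents CommTensorSketch(CommTensorSketch(t)).
        if isinstance(data, cls):
            return data
        obj = super().__new__(cls)
        obj._data = data
        return obj

t = object()
once = CommTensorSketch(t)
twice = CommTensorSketch(once)   # re-wrapping is a no-op
assert once is twice
assert once._data is t
```

Under this scheme, test helpers like `_test_work_wait` can wrap their input defensively without risking double wrapping if the `comm_fn` under test also wraps.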
@pytorchbot merge -g

@pytorchbot successfully started a merge job. Check the current status here.
Pull Request resolved: #84963
Approved by: https://github.com/wanchaol