
[Gradient Compression] Simplify the implementation of warm-start#50981

Closed
wayi1 wants to merge 4 commits into gh/SciPioneer/47/base from gh/SciPioneer/47/head

Conversation

@wayi1
Contributor

@wayi1 wayi1 commented Jan 23, 2021

Stack from ghstack:

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback and warm-up needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: D26034418

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)

[ghstack-poisoned]
@facebook-github-bot
Contributor

facebook-github-bot commented Jan 23, 2021

💊 CI failures summary and remediations

As of commit 3d80e4b (more details on the Dr. CI page):


  • 3/3 failures introduced in this PR

🕵️ 3 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_build (1/3)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

Automatic merge failed; fix conflicts and then commit the result.
CONFLICT (add/add): Merge conflict in .jenkins/caffe2/test.sh
Auto-merging .jenkins/caffe2/test.sh
CONFLICT (add/add): Merge conflict in .gitmodules
Auto-merging .gitmodules
CONFLICT (add/add): Merge conflict in .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
Auto-merging .circleci/verbatim-sources/job-specs/pytorch-job-specs.yml
CONFLICT (add/add): Merge conflict in .circleci/scripts/python_doc_push_script.sh
Auto-merging .circleci/scripts/python_doc_push_script.sh
CONFLICT (add/add): Merge conflict in .circleci/config.yml
Auto-merging .circleci/config.yml
Automatic merge failed; fix conflicts and then commit the result.


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_py3_6_gcc5_4_build (2/3)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

(Same merge conflicts as build 1/3 above.)


Exited with code exit status 1

See CircleCI build pytorch_linux_xenial_cuda9_2_cudnn7_py3_gcc5_4_build (3/3)

Step: "(Optional) Merge target branch" (full log | diagnosis details | 🔁 rerun)

(Same merge conflicts as build 1/3 above.)


Exited with code exit status 1


This comment was automatically generated by Dr. CI (expand for details). Follow this link to opt out of these comments for your Pull Requests.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

@facebook-github-bot added the cla signed and oncall: distributed (Add this issue/PR to distributed oncall triage queue) labels Jan 23, 2021
wayi1 pushed a commit that referenced this pull request Jan 23, 2021
Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)

ghstack-source-id: 120257256
Pull Request resolved: #50981
…-start"

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)

[ghstack-poisoned]
Contributor

@rohan-varma rohan-varma left a comment


I'm not sure if I understand this change. If PowerSGD will be applied in the first few iterations, shouldn't we take into account bucket rebuilding? Bucket rebuilding occurs in the 2nd iteration, so it seems like powerSGD would have to be aware of this?

@wayi1
Contributor Author

wayi1 commented Jan 26, 2021

I'm not sure if I understand this change. If PowerSGD will be applied in the first few iterations, shouldn't we take into account bucket rebuilding? Bucket rebuilding occurs in the 2nd iteration, so it seems like powerSGD would have to be aware of this?

Sorry, I mistyped the description. In the first few iterations, vanilla allreduce rather than PowerSGD will be used by default. Therefore, we can simplify the warm-start implementation here.

Actually, I plan to do this for all the comm hooks in the future, so comm hook developers won't be bothered by the internal bucketization details. The ultimate goal is to decouple engineering code and research code, and hide the internal implementation details as much as possible -- ideally, a comm hook developer does not need to know how gradients are bucketized in DDP.
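The control flow being described can be sketched as follows. This is a minimal pure-Python illustration, not the actual DDP/PowerSGD implementation; all names (`make_comm_hook`, `start_powersgd_iter`) are hypothetical:

```python
def make_comm_hook(start_powersgd_iter):
    """Illustrative comm hook: vanilla allreduce during warm-up,
    compression (stubbed out) afterwards. Not the real PyTorch API."""
    state = {"iter": 0, "cache": {}}

    def hook(bucket_index, grad):
        state["iter"] += 1
        if state["iter"] <= start_powersgd_iter:
            # Warm-up phase: plain allreduce stand-in. Bucket shapes may
            # still change (DDP rebuilds its buckets early on), so no
            # per-bucket tensors are cached yet.
            return list(grad)
        # Compression phase: bucket shapes are stable by now, so tensors
        # cached on first use never need to be rebuilt.
        cache = state["cache"].setdefault(bucket_index, [0.0] * len(grad))
        return [g + e for g, e in zip(grad, cache)]  # error-feedback stand-in

    return hook, state
```

During the first `start_powersgd_iter` calls the cache stays empty, so a later bucket rebuild cannot invalidate anything; only once compression kicks in is per-bucket state allocated.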

@wayi1 wayi1 requested a review from rohan-varma January 26, 2021 06:48
…-start"


Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)

[ghstack-poisoned]
wayi1 pushed a commit that referenced this pull request Jan 26, 2021
…and warm-start

Pull Request resolved: #50981

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback and warm-up needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202
ghstack-source-id: 120398567

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)
Contributor

@rohan-varma rohan-varma left a comment


Looks good, can you clarify in the code that we are now expecting vanilla allreduce to be run in the first few iterations so this change is clearer?

Also, would be good to update documentation in the code or anywhere else necessary that we are enforcing vanilla allreduce to be run for the first few iterations. That way comm. hook users won't be surprised that their algorithm doesn't kick in for a few iterations.

Also, please make sure CI is clear before landing.

@wayi1
Contributor Author

wayi1 commented Jan 29, 2021

Looks good, can you clarify in the code that we are now expecting vanilla allreduce to be run in the first few iterations so this change is clearer?

Also, would be good to update documentation in the code or anywhere else necessary that we are enforcing vanilla allreduce to be run for the first few iterations. That way comm. hook users won't be surprised that their algorithm doesn't kick in for a few iterations.

Also, please make sure CI is clear before landing.

Thanks for the suggestion! This is already documented in the PowerSGDState comments. I have now added more comments to the powerSGD_hook function as well.
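To illustrate why delaying compression past the bucket rebuild makes the cached tensors safe, here is a hedged, self-contained simulation. It is not PyTorch code; the function name, shapes, and iteration numbers are all hypothetical:

```python
def count_stale_caches(bucket_shapes_per_iter, start_compress_iter):
    """Count how many cached per-bucket tensors would need rebuilding
    when compression starts at `start_compress_iter` (1-indexed)."""
    cache = {}
    stale = 0
    for it, shapes in enumerate(bucket_shapes_per_iter, start=1):
        if it < start_compress_iter:
            continue  # vanilla allreduce: no per-bucket state is kept
        for idx, shape in enumerate(shapes):
            if idx in cache and cache[idx] != shape:
                stale += 1  # cached tensor's shape went stale: rebuild
            cache[idx] = shape
    return stale

# Buckets are rebuilt after iteration 1, so shapes change at iteration 2.
shapes = [[(4,), (4,)], [(6,), (2,)], [(6,), (2,)]]
assert count_stale_caches(shapes, start_compress_iter=1) == 2  # caching too early
assert count_stale_caches(shapes, start_compress_iter=3) == 0  # warm-start after rebuild
```

With compression deferred until after the rebuild, every cached tensor is allocated against its final bucket shape, which is exactly the simplification this PR makes.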

…-start"


Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback and warm-up needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression #47202

Differential Revision: [D26034418](https://our.internmc.facebook.com/intern/diff/D26034418/)

[ghstack-poisoned]
@facebook-github-bot
Contributor

This pull request has been merged in b619d37.

@facebook-github-bot facebook-github-bot deleted the gh/SciPioneer/47/head branch February 1, 2021 15:19
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026
…and warm-start (pytorch#50981)

Summary:
Pull Request resolved: pytorch#50981

Since vanilla allreduce will be applied in the first few iterations, the bucket rebuilding process will not affect caching per-variable tensors.

Previously the cached tensors used for error feedback and warm-up needed to be rebuilt later, because their corresponding input tensors' shapes will change after the bucket rebuild process.

Original PR issue: Investigate Applying PowerSGD to Communication Hook for Gradient Compression pytorch#47202
ghstack-source-id: 120617971

Test Plan: real run

Reviewed By: rohan-varma

Differential Revision: D26034418

fbshipit-source-id: e8744431c7f3142d75b77b60110e6861c2ff5c14