[Fix] Avoid infinite GPU waiting in dist training #6501

Merged

ZwwWayne merged 5 commits into open-mmlab:dev-v2.19.0 from fingertap:master on Nov 24, 2021
Conversation
See #6495 for details. This PR adds an assertion before reducing the log_vars to ensure every GPU has the same number of log_vars entries, preventing infinite waiting.
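The failure mode is that a collective reduce on `log_vars` hangs forever when one rank produces a different number of entries than the others. Below is a minimal sketch of such a pre-reduce length check, assuming `torch.distributed`; the helper name `assert_log_vars_length` and the use of `all_gather` are illustrative, not the exact code merged in this PR.

```python
import os
import torch
import torch.distributed as dist


def assert_log_vars_length(log_vars):
    """Fail fast if len(log_vars) differs across ranks.

    If ranks disagree on the number of entries, the subsequent
    all_reduce over the stacked values would block forever, so we
    gather the per-rank lengths first and assert they all match.
    """
    length = torch.tensor([len(log_vars)], dtype=torch.long)
    lengths = [torch.zeros_like(length) for _ in range(dist.get_world_size())]
    dist.all_gather(lengths, length)
    assert all(l.item() == len(log_vars) for l in lengths), \
        'log_vars has an inconsistent length across GPUs; ' \
        'reducing it would hang the distributed group.'


if __name__ == '__main__':
    # Single-process gloo group, just to exercise the check locally.
    os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
    os.environ.setdefault('MASTER_PORT', '29500')
    dist.init_process_group('gloo', rank=0, world_size=1)
    assert_log_vars_length({'loss': 1.0, 'acc': 0.9})  # passes: all ranks agree
    dist.destroy_process_group()
```

With world_size > 1, a rank that drops a loss term (e.g. a head that produced no valid targets) would trip the assertion with a clear error instead of deadlocking every GPU.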