Make ideep honor torch.set_num_thread changes #53871
Closed
When compiled with OpenMP support, `ideep`'s computational_cache would cache the maximum number of OpenMP workers. This number could be wrong after a `torch.set_num_threads` call, so clear it after the call. Fixes #53565
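For context, a minimal sketch of the symptom this PR addresses, assuming a CPU build with MKL-DNN (oneDNN) enabled; the tensor shape and the relu op are illustrative only:

```python
import torch

# Ops on mkldnn-layout tensors dispatch through ideep. Before this fix,
# ideep cached the OpenMP worker count on first use, so a later
# torch.set_num_threads call could be silently ignored by those ops.
x = torch.randn(1024, 1024)

# The first mkldnn op populates ideep's computation cache with the
# OpenMP worker count that is current at this moment.
_ = torch.relu(x.to_mkldnn()).to_dense()

# With this PR, changing the thread count also clears ideep's cache,
# so subsequent mkldnn ops pick up the new worker count.
torch.set_num_threads(1)
assert torch.get_num_threads() == 1

_ = torch.relu(x.to_mkldnn()).to_dense()
```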
Contributor
💊 CI failures summary, as of commit 1ddae03 (more details on the Dr. CI page):
ci.pytorch.org: 1 failed
Contributor
facebook-github-bot
left a comment
@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
walterddr approved these changes on Mar 12, 2021
Codecov Report
@@            Coverage Diff             @@
##           master   #53871      +/-   ##
==========================================
- Coverage   77.29%   77.29%   -0.01%
==========================================
  Files        1888     1888
  Lines      183517   183521       +4
==========================================
  Hits       141853   141853
- Misses      41664    41668       +4
Contributor
malfet added a commit to malfet/pytorch that referenced this pull request on Mar 15, 2021
Summary: When compiled with OpenMP support, `ideep`'s computational_cache would cache the maximum number of OpenMP workers. This number could be wrong after a `torch.set_num_threads` call, so clear it after the call. Fixes pytorch#53565 Pull Request resolved: pytorch#53871 Reviewed By: albanD Differential Revision: D27003265 Pulled By: malfet fbshipit-source-id: 1d84c23070eafb3d444e09590d64f97f99ae9d36
This was referenced Mar 15, 2021
malfet added a commit that referenced this pull request on Mar 16, 2021
Summary: When compiled with OpenMP support, `ideep`'s computational_cache would cache the maximum number of OpenMP workers. This number could be wrong after a `torch.set_num_threads` call, so clear it after the call. Fixes #53565 Pull Request resolved: #53871 Reviewed By: albanD Differential Revision: D27003265 Pulled By: malfet fbshipit-source-id: 1d84c23070eafb3d444e09590d64f97f99ae9d36
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request on Apr 24, 2026
Summary: When compiled with OpenMP support, `ideep`'s computational_cache would cache the maximum number of OpenMP workers. This number could be wrong after a `torch.set_num_threads` call, so clear it after the call. Fixes pytorch#53565 Pull Request resolved: pytorch#53871 Reviewed By: albanD Differential Revision: D27003265 Pulled By: malfet fbshipit-source-id: 1d84c23070eafb3d444e09590d64f97f99ae9d36