
Make ideep honor torch.set_num_thread changes#53871

Closed
malfet wants to merge 4 commits into master from malfet/ideep-to-respect-set-num-threads

Conversation

@malfet
Contributor

@malfet malfet commented Mar 12, 2021

When compiled with OpenMP support, `ideep`'s `computational_cache` caches the maximum number of OpenMP workers.
This number can become stale after a `torch.set_num_threads` call, so clear the cache after the call.

Fixes #53565

@malfet malfet requested review from a team and ngimel March 12, 2021 03:42
@facebook-github-bot
Contributor

facebook-github-bot commented Mar 12, 2021

💊 CI failures summary and remediations

As of commit 1ddae03 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 2/2 non-scanned failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.

Collaborator

@ngimel ngimel left a comment


lgtm

Contributor

@facebook-github-bot facebook-github-bot left a comment


@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

Comment thread on aten/src/ATen/native/mkldnn/IDeepRegistration.cpp (outdated)
Contributor

@facebook-github-bot facebook-github-bot left a comment


@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov

codecov Bot commented Mar 13, 2021

Codecov Report

Merging #53871 (1ddae03) into master (d726ce6) will decrease coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #53871      +/-   ##
==========================================
- Coverage   77.29%   77.29%   -0.01%     
==========================================
  Files        1888     1888              
  Lines      183517   183521       +4     
==========================================
  Hits       141853   141853              
- Misses      41664    41668       +4     

@facebook-github-bot
Contributor

@malfet merged this pull request in f2689b1.

@malfet malfet deleted the malfet/ideep-to-respect-set-num-threads branch March 14, 2021 20:20
malfet added a commit to malfet/pytorch that referenced this pull request Mar 15, 2021
Summary:
When compiled with OpenMP support, `ideep`'s `computational_cache` caches the maximum number of OpenMP workers.
This number can become stale after a `torch.set_num_threads` call, so clear the cache after the call.

Fixes pytorch#53565

Pull Request resolved: pytorch#53871

Reviewed By: albanD

Differential Revision: D27003265

Pulled By: malfet

fbshipit-source-id: 1d84c23070eafb3d444e09590d64f97f99ae9d36
malfet added a commit that referenced this pull request Mar 16, 2021
laurentdupin pushed a commit to laurentdupin/pytorch that referenced this pull request Apr 24, 2026


Development

Successfully merging this pull request may close these issues.

conv1d, conv2d, etc. causing segmentation fault on torch 1.8.0

4 participants