[Enhance] Add setup multi-processing both in train and test. #7036
ZwwWayne merged 2 commits into open-mmlab:dev from
Conversation
Codecov Report
@@ Coverage Diff @@
## dev #7036 +/- ##
==========================================
+ Coverage 62.39% 62.42% +0.03%
==========================================
Files 329 330 +1
Lines 26176 26199 +23
Branches 4432 4436 +4
==========================================
+ Hits 16332 16355 +23
- Misses 8974 8975 +1
+ Partials 870 869 -1
# setup OMP threads
# This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py  # noqa
if 'OMP_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
    omp_num_threads = 1
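The quoted diff only shows the OMP branch. A minimal sketch of the overall behavior under discussion (defaulting both OMP and MKL thread counts to 1 only when the user has not set them and multiple dataloader workers are in use) might look like the following; the function name `setup_multi_processes` and its signature are assumptions for illustration, not necessarily the actual mmdetection API:

```python
import os


def setup_multi_processes(workers_per_gpu: int) -> None:
    """Hypothetical helper: limit per-process thread pools.

    When several dataloader workers run per GPU, letting each process
    spawn a full OMP/MKL thread pool can oversubscribe the CPU. Only
    apply the default of 1 thread if the user has not set the variable.
    """
    if workers_per_gpu > 1:
        # setdefault respects any value the user exported beforehand
        os.environ.setdefault('OMP_NUM_THREADS', '1')
        os.environ.setdefault('MKL_NUM_THREADS', '1')
```

Because the helper uses `setdefault`, exporting e.g. `OMP_NUM_THREADS=4` before launching training would override the default, which addresses the MMOCR-style slowdown raised in the review below.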
Can we also add variables in cfg named omp_num_threads and mkl_num_threads? In some other repos, like MMOCR, setting them to 1 slows down training.
These two are environment variables, so it would be better to set them on the command line instead of in the config.
Ideally, we should print this information in the log. We can merge this PR for now and update the logging in a new version.
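The suggestion above is to surface the effective thread settings in the training log. A minimal sketch of such a logging step (the helper name `log_multi_processing_env` is hypothetical, not part of mmdetection):

```python
import logging
import os

logger = logging.getLogger(__name__)


def log_multi_processing_env() -> dict:
    """Hypothetical helper: log and return the thread-related env vars.

    Reports OMP_NUM_THREADS and MKL_NUM_THREADS so users can see in the
    log whether the defaults were applied or their own values were kept.
    """
    settings = {
        var: os.environ.get(var, '<unset>')
        for var in ('OMP_NUM_THREADS', 'MKL_NUM_THREADS')
    }
    for var, value in settings.items():
        logger.info('%s=%s', var, value)
    return settings
```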
…lab#7036) * [Enhance] Add setup multi-processing both in train and test. * switch to torch mp
Motivation
Add setup multi-processing both in train and test.
Add unit tests.