
[Enhance] Add setup multi-processing both in train and test. #7036

Merged
ZwwWayne merged 2 commits into open-mmlab:dev from RangiLyu:speedup_v3 on Jan 25, 2022

Conversation

@RangiLyu (Member) commented:

Motivation

Add setup multi-processing both in train and test.
Add unit tests.

@codecov bot commented Jan 19, 2022

Codecov Report

Merging #7036 (449a86c) into dev (4b87ddc) will increase coverage by 0.03%.
The diff coverage is 95.65%.


@@            Coverage Diff             @@
##              dev    #7036      +/-   ##
==========================================
+ Coverage   62.39%   62.42%   +0.03%     
==========================================
  Files         329      330       +1     
  Lines       26176    26199      +23     
  Branches     4432     4436       +4     
==========================================
+ Hits        16332    16355      +23     
- Misses       8974     8975       +1     
+ Partials      870      869       -1     
Flag       Coverage Δ
unittests  62.40% <95.65%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                      Coverage Δ
mmdet/utils/setup_env.py                            95.45% <95.45%> (ø)
mmdet/utils/__init__.py                             100.00% <100.00%> (ø)
mmdet/models/dense_heads/base_dense_head.py         88.70% <0.00%> (-1.70%) ⬇️
mmdet/core/bbox/assigners/max_iou_assigner.py       73.68% <0.00%> (+1.31%) ⬆️
mmdet/core/bbox/samplers/sampling_result.py         74.60% <0.00%> (+1.58%) ⬆️
mmdet/models/roi_heads/mask_heads/maskiou_head.py   89.65% <0.00%> (+2.29%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

# setup OMP threads
# This code is referred from https://github.com/pytorch/pytorch/blob/master/torch/distributed/run.py # noqa
if 'OMP_NUM_THREADS' not in os.environ and cfg.data.workers_per_gpu > 1:
    omp_num_threads = 1
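For context, the PR centralizes this kind of environment setup in a helper (mmdet/utils/setup_env.py). Below is a minimal, dependency-free sketch of the idea; the function name, signature, defaults, and the use of the stdlib multiprocessing module (the PR itself switched to torch's mp wrapper) are assumptions for illustration, not the actual implementation:

```python
import multiprocessing as mp
import os


def setup_multi_processing(workers_per_gpu, omp_num_threads=1, mkl_num_threads=1):
    """Hypothetical sketch of a multi-processing setup step.

    Caps per-worker thread pools so dataloader workers do not
    oversubscribe the CPU, without overriding values the user has
    already exported on the command line.
    """
    # Prefer the 'fork' start method on platforms that support it.
    if os.name != 'nt' and mp.get_start_method(allow_none=True) is None:
        mp.set_start_method('fork')
    # Only cap threads when multiple dataloader workers are in use;
    # setdefault leaves any user-exported value untouched.
    if workers_per_gpu > 1:
        os.environ.setdefault('OMP_NUM_THREADS', str(omp_num_threads))
        os.environ.setdefault('MKL_NUM_THREADS', str(mkl_num_threads))
```

Using `os.environ.setdefault` mirrors the `'OMP_NUM_THREADS' not in os.environ` guard in the snippet above: an explicit command-line export always wins.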
Collaborator:

Can we also add variables in cfg named omp_num_threads and mkl_num_threads? In some other repos, such as MMOCR, setting this value to 1 slows down training.

Member (Author):

These two are environment variables. It would be better to set them on the command line than in the config.

Collaborator:

Ideally, we should print this information in the log. We can merge this PR for now and update the logging in a later version.
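One lightweight way to surface these settings in the log, as suggested above, is a small helper that reports the thread-related environment variables. This is a hypothetical sketch, not the PR's implementation; the function name, variable list, and `<unset>` placeholder are all assumptions:

```python
import os


def mp_env_summary():
    """Return log-ready lines describing thread-related env vars.

    Hypothetical helper: lets a training script report what the
    multi-processing setup step actually applied.
    """
    variables = ('OMP_NUM_THREADS', 'MKL_NUM_THREADS')
    return ['{}={}'.format(v, os.environ.get(v, '<unset>')) for v in variables]
```

A training script could pass each returned line to `logger.info()` right after the setup step runs, so users can see at a glance whether their command-line exports took effect.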

@ZwwWayne ZwwWayne merged commit ebf7476 into open-mmlab:dev Jan 25, 2022
chhluo pushed a commit to chhluo/mmdetection that referenced this pull request on Feb 21, 2022
  …lab#7036)
  * [Enhance] Add setup multi-processing both in train and test.
  * switch to torch mp
ZwwWayne pushed a commit that referenced this pull request on Jul 18, 2022
  * [Enhance] Add setup multi-processing both in train and test.
  * switch to torch mp
ZwwWayne pushed a commit to ZwwWayne/mmdetection that referenced this pull request on Jul 19, 2022
  …lab#7036)
  * [Enhance] Add setup multi-processing both in train and test.
  * switch to torch mp
@RangiLyu RangiLyu deleted the speedup_v3 branch December 17, 2022 03:42
Labels: none yet
Projects: none yet
3 participants