[Feature] Support ConvNeXt#1216
Conversation
In the refactored code, the NHWC path (NCHW -> permute to NHWC -> PyTorch LayerNorm -> linear layers -> layer scale -> permute back to NCHW) replaces the NCHW path (NCHW -> custom LayerNorm -> 1x1 convs -> layer scale); the former style may be faster. Reference: facebookresearch/ConvNeXt#18.
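For context, here is a minimal sketch of the permute-style block being discussed (illustrative only, not the code in this PR; `dim` and `layer_scale_init_value` are placeholder values):

```python
import torch
import torch.nn as nn


class PermuteStyleBlock(nn.Module):
    """NCHW -> permute to NHWC -> nn.LayerNorm -> nn.Linear x 2 -> layer scale -> permute back.

    The conv-style alternative stays in NCHW and uses a channels-first
    LayerNorm plus 1x1 Conv2d layers instead of the Linear layers.
    """

    def __init__(self, dim=96, layer_scale_init_value=1e-6):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)           # normalizes the last (channel) dim
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # 1x1 conv expressed as a linear layer
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)
        self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(dim))

    def forward(self, x):                       # x: (N, C, H, W)
        shortcut = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)               # (N, H, W, C)
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = self.gamma * x                      # layer scale
        x = x.permute(0, 3, 1, 2)               # back to (N, C, H, W)
        return shortcut + x
```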
xvjiarui left a comment
Please refer to ResNet for some details
Codecov Report
@@            Coverage Diff             @@
##           master    #1216      +/-   ##
==========================================
+ Coverage   90.27%   90.32%    +0.04%
==========================================
  Files         131      132        +1
  Lines        7621     7699       +78
  Branches     1267     1290       +23
==========================================
+ Hits         6880     6954       +74
  Misses        531      531
- Partials      210      214        +4
Please look at open-mmlab/mmpretrain#670
Resolved review threads (now outdated) on tests/test_core/test_learning_rate_decay_optimizer_constructor.py.
MeowZheng left a comment
Might add a metafile and README.
Resolved review threads (now outdated) on tests/test_core/test_learning_rate_decay_optimizer_constructor.py and on configs/convnext/upernet_convnext_tiny_fp16_512x512_160k_ade20k.py, configs/convnext/upernet_convnext_small_fp16_512x512_160k_ade20k.py, and configs/convnext/upernet_convnext_xlarge_fp16_640x640_160k_ade20k.py.
* upload original backbone and configs
* ConvNext Refactor
* ConvNext Refactor
* convnext customization refactor with mmseg style
* convnext customization refactor with mmseg style
* add ade20k_640x640.py
* upload files for training
* delete dist_optimizer_hook and remove layer_decay_optimizer_constructor
* check max(out_indices) < num_stages
* add unittest
* fix lint error
* use MMClassification backbone
* fix bugs in base_1k
* add mmcls in requirements/mminstall.txt
* add mmcls in requirements/mminstall.txt
* fix drop_path_rate and layer_scale_init_value
* use logger.info instead of print
* add mmcls in runtime.txt
* fix f string && delete
* add docstring in LearningRateDecayOptimizerConstructor and fix mmcls version in requirements
* fix typo in LearningRateDecayOptimizerConstructor
* use ConvNext models in unit test for LearningRateDecayOptimizerConstructor
* add unit test
* fix typo
* fix typo
* add layer_wise and fix redundant backbone.downsample_norm in it
* fix unit test
* give a ground truth lr_scale and weight_decay
* upload models and readme
* delete 'backbone.stem_norm' and 'backbone.downsample_norm' in get_num_layer()
* fix unit test and use mmcls url
* update md2yml.py and metafile
* fix typo
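The commit trail above centers on the LearningRateDecayOptimizerConstructor. As a rough illustration of the idea (not the actual mmseg implementation; `get_layer_id` and the parameter-name prefixes below are hypothetical), layer-wise LR decay assigns each parameter group a scale of `decay_rate ** (num_layers - layer_id)` and usually exempts norm/bias parameters from weight decay:

```python
def build_decay_param_groups(model, base_lr=1e-4, weight_decay=0.05,
                             decay_rate=0.9, num_layers=6):
    """Group parameters by depth and decay the lr of earlier layers."""

    def get_layer_id(name):
        # Hypothetical depth lookup: stem -> 0, stage i -> i + 1, rest -> last.
        if name.startswith('backbone.stem'):
            return 0
        if name.startswith('backbone.stages.'):
            return int(name.split('.')[2]) + 1
        return num_layers - 1

    groups = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        no_decay = param.ndim == 1 or name.endswith('.bias')  # norms and biases
        layer_id = get_layer_id(name)
        key = (layer_id, no_decay)
        if key not in groups:
            groups[key] = {
                'params': [],
                'lr': base_lr * decay_rate ** (num_layers - layer_id),
                'weight_decay': 0.0 if no_decay else weight_decay,
            }
        groups[key]['params'].append(param)
    return list(groups.values())
```

The resulting groups can be passed directly to torch.optim.AdamW.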
Motivation
Support ConvNeXt.
Paper: https://arxiv.org/abs/2201.03545
Github: https://github.com/facebookresearch/ConvNeXt
Modification
Needs some refactoring; work in progress.
Port the mmcv_custom code and align the inference metrics.
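A hedged sketch of how the pieces might fit together in one of the added configs (the field values and checkpoint URL below are illustrative placeholders, not necessarily the settings this PR ships):

```python
# Illustrative excerpt in the spirit of
# configs/convnext/upernet_convnext_tiny_fp16_512x512_160k_ade20k.py.
checkpoint_file = 'https://download.openmmlab.com/mmclassification/...'  # placeholder

model = dict(
    backbone=dict(
        type='mmcls.ConvNeXt',   # backbone reused from MMClassification
        arch='tiny',
        out_indices=[0, 1, 2, 3],
        drop_path_rate=0.4,
        layer_scale_init_value=1.0,
        gap_before_final_norm=False,
        init_cfg=dict(
            type='Pretrained', checkpoint=checkpoint_file, prefix='backbone.')))

optimizer = dict(
    constructor='LearningRateDecayOptimizerConstructor',
    type='AdamW',
    lr=1e-4,
    weight_decay=0.05,
    paramwise_cfg={
        'decay_rate': 0.9,        # layer-wise / stage-wise lr decay factor
        'decay_type': 'stage_wise',
        'num_layers': 6,
    })
```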