This is the codebase (including the search code) for the ICLR 2020 paper AtomNAS: Fine-Grained End-to-End Neural Architecture Search.
The network configs for AtomNAS-A/B/C can be found under apps/searched/models; each list in inverted_residual_setting corresponds to [output_channel, num_repeat, stride, kernel_sizes, hidden_dims, has_first_pointwise].
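For illustration, a single entry might look like the following sketch (the values are invented; the real settings are in the files under apps/searched/models):

```yaml
inverted_residual_setting:
  # [output_channel, num_repeat, stride, kernel_sizes, hidden_dims, has_first_pointwise]
  - [32, 4, 2, [3, 5, 7], [96, 96, 96], True]
```

Read as: a block with 32 output channels, repeated 4 times, stride 2, mixing 3x3/5x5/7x7 depthwise kernels with 96 hidden channels each, preceded by a first pointwise convolution.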
Set the following ENV variables:
- `$METIS_WORKER_0_HOST`: IP address of worker 0
- `$METIS_WORKER_0_PORT`: port used to initialize the distributed environment
- `$METIS_TASK_INDEX`: index of this task
- `$REMOTE_WORKER_NUM`: number of workers
- `$REMOTE_WORKER_GPU`: number of GPUs (NOTE: must exactly match the number of local GPUs visible via `CUDA_VISIBLE_DEVICES`)
- `$REMOTE_OUTPUT`: output directory
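A minimal sketch of this setup for a single 4-GPU machine (all values are placeholders; adapt them to your cluster):

```shell
# Hypothetical single-machine, 4-GPU configuration.
export METIS_WORKER_0_HOST=127.0.0.1  # IP address of worker 0
export METIS_WORKER_0_PORT=29500      # free port for distributed init
export METIS_TASK_INDEX=0             # index of this task (0 on worker 0)
export REMOTE_WORKER_NUM=1            # total number of workers
export REMOTE_WORKER_GPU=4            # must match the GPU count in CUDA_VISIBLE_DEVICES
export REMOTE_OUTPUT=./output         # output directory
export CUDA_VISIBLE_DEVICES=0,1,2,3
```

With multiple machines, each worker sets its own `$METIS_TASK_INDEX` while sharing the same worker-0 host and port.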
Set the following ENV variables:
- `$REMOTE_WORKER_GPU`: number of GPUs (NOTE: must exactly match the number of local GPUs visible via `CUDA_VISIBLE_DEVICES`)
- `$REMOTE_OUTPUT`: output directory
For Table 1:
- AtomNAS-A: `bash scripts/run.sh apps/slimming/shrink/atomnas_a.yml`
- AtomNAS-B: `bash scripts/run.sh apps/slimming/shrink/atomnas_b.yml`
- AtomNAS-C: `bash scripts/run.sh apps/slimming/shrink/atomnas_c.yml`
If everything is OK, you should get results similar to those reported in the paper.
Requirements
- python3, pytorch 1.1+, torchvision 0.3+, pyyaml 3.13, lmdb, pyarrow, pillow (pillow-simd recommended).
- Prepare ImageNet data following the PyTorch example.
- Optional: generate an LMDB dataset with `utils/lmdb_dataset.py`.
Miscellaneous
- The codebase is a general ImageNet training framework based on PyTorch, using yaml configs, with several extensions under the `apps` dir.
- Supports `${ENV}` in yaml configs.
- Supports `_include` for hierarchical configs.
- Supports the `_default` key for overloading.
- Supports `xxx.yyy.zzz` for partial overloading.
- Command: `bash scripts/run.sh {{path_to_yaml_config}}`.
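The config features above can be sketched as follows (file names, keys, and merge semantics here are hypothetical illustrations; see the configs under apps/ for real usage):

```yaml
# base.yml (hypothetical)
_default:
  optimizer:
    name: sgd
    momentum: 0.9
  lr: 0.1
log_dir: ${REMOTE_OUTPUT}   # ${ENV} is expanded from the environment

# exp.yml (hypothetical)
_include: base.yml          # pull in base.yml as the parent config
lr: 0.05                    # overloads the inherited value
optimizer.name: adam        # xxx.yyy.zzz partially overloads a nested key
```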
This repo is based on slimmable_networks and benefits from the following projects.
Thanks to the contributors of these repos!