[Feature] Add DMCP and fix the deploy pipeline of NAS algorithms #406

Merged Mar 2, 2023 · 69 commits
6bd286b
Copybook
Lxtccc Dec 22, 2022
6fc9f78
Newly created copy PR
Lxtccc Dec 22, 2022
55d96c3
Newly created copy PR
Lxtccc Dec 22, 2022
26da4d2
update op_counters
Lxtccc Dec 22, 2022
0058373
update subnet/commit/FLOPsCounter
Lxtccc Jan 2, 2023
f4adf90
update docs/UT
Lxtccc Jan 4, 2023
f16ba55
update docs/UT
Lxtccc Jan 4, 2023
864d08c
add setter for current_mask
Lxtccc Jan 5, 2023
b41acc3
replace current_mask with activated_tensor_channel
Lxtccc Jan 6, 2023
41b540c
update subnet training
Lxtccc Jan 6, 2023
5991046
fix ci
Lxtccc Jan 9, 2023
c6a01c3
fix ci
Lxtccc Jan 9, 2023
8c4d8ea
fix ci
Lxtccc Jan 9, 2023
bc4bd39
fix readme.md
Lxtccc Jan 9, 2023
406cdbc
fix readme.md
Lxtccc Jan 9, 2023
7830ed9
fix conflict
Lxtccc Jan 9, 2023
20b2bcb
update
Lxtccc Jan 9, 2023
60cc3ce
fix expression
Lxtccc Jan 9, 2023
cf81f88
update
Lxtccc Jan 9, 2023
c9715c0
fix CI
Lxtccc Jan 10, 2023
fb7f671
fix UT
Lxtccc Jan 10, 2023
48415fe
Merge remote-tracking branch 'upgrade/dev-1.x' into DMCP
Lxtccc Jan 30, 2023
f58826c
fix ci
Lxtccc Feb 1, 2023
c1ce2ac
fix arch YAMLs
Lxtccc Feb 1, 2023
0cba028
fix yapf
Lxtccc Feb 1, 2023
a9299cb
revise mmcv version<=2.0.0rc3
Lxtccc Feb 1, 2023
4bca1f7
fix build.yaml
Lxtccc Feb 1, 2023
117e1e6
Rollback mmdet to v3.0.0rc5
Lxtccc Feb 1, 2023
7051a48
Rollback mmdet to v3.0.0rc5
Lxtccc Feb 1, 2023
dd00ce7
Rollback mmseg to v1.0.0rc4
Lxtccc Feb 1, 2023
4c7023b
remove search_groups in mutator
Lxtccc Feb 2, 2023
6d5ad90
fix conflict
Lxtccc Feb 2, 2023
6eeebc7
revert env change
Lxtccc Feb 2, 2023
fd183f9
update usage of sub_model
Lxtccc Feb 7, 2023
20779c7
fix UT
Lxtccc Feb 8, 2023
8f06587
fix bignas config
Lxtccc Feb 8, 2023
630359f
fix UT for dcff & registry
Lxtccc Feb 8, 2023
2328697
update Ut&channel_mutator
Lxtccc Feb 15, 2023
882c136
fix test_channel_mutator
Lxtccc Feb 15, 2023
d1afbaf
fix Ut
Lxtccc Feb 15, 2023
5dd0aa6
fix bug for load dcffnet
Lxtccc Feb 15, 2023
58df15f
update nas config
Lxtccc Feb 16, 2023
31b780a
update nas config
Lxtccc Feb 16, 2023
2b9c8a6
fix api in evolution_search_loop
Feb 17, 2023
8562854
update evolu_search_loop
Lxtccc Feb 17, 2023
d8264a0
update evolu_search_loop
Lxtccc Feb 17, 2023
a06b823
fix metric_predictor
Feb 17, 2023
6be8362
Merge branch 'dev-1.x' into DMCP
Feb 17, 2023
7a72392
update url
Lxtccc Feb 17, 2023
e6eda89
Merge branch 'DMCP' of https://github.com/Lxtccc/mmrazor into DMCP
Lxtccc Feb 17, 2023
d239f1d
fix a0 fine_grained
Feb 23, 2023
cc1acb1
fix subnet export misskey
Feb 28, 2023
e0fabc3
fix ofa yaml
Feb 28, 2023
5510cf8
fix lint
aptsunny Feb 28, 2023
31a062e
fix comments
Mar 1, 2023
e9c2c9f
merge dev-1.x into dmcp
Mar 1, 2023
450c885
add autoformer cfg
Mar 1, 2023
0efa384
update readme
Mar 1, 2023
a4cbf32
fix error
Mar 1, 2023
b950750
update supernet link
Mar 1, 2023
e98846d
fix sub_model configs
Mar 1, 2023
ca39542
update subnet inference readme
Mar 1, 2023
64a0589
fix lint
aptsunny Mar 1, 2023
c1bc1b9
Merge branch 'dev-1.x' into DMCP
aptsunny Mar 2, 2023
7a45886
fix lint
aptsunny Mar 2, 2023
83cec1c
Update autoformer_subnet_8xb256_in1k.py
sunnyxiaohu Mar 2, 2023
86b50fa
update test.py to support args.checkpoint as none
Mar 2, 2023
573fa42
update DARTS readme
Mar 2, 2023
7d57599
update readme
Mar 2, 2023
98 changes: 98 additions & 0 deletions configs/_base_/settings/imagenet_bs2048_dmcp.py
@@ -0,0 +1,98 @@
# dataset settings
dataset_type = 'mmcls.ImageNet'

max_search_epochs = 100
# learning rate setting
param_scheduler = [
    # warm up learning rate scheduler
    dict(
        type='LinearLR',
        start_factor=0.5,
        by_epoch=True,
        begin=0,
        end=10,
        convert_to_iter_based=True),
    dict(
        type='CosineAnnealingLR',
        T_max=max_search_epochs,
        eta_min=0.08,
        by_epoch=True,
        begin=10,
        end=max_search_epochs,
        convert_to_iter_based=True),
]

# optimizer setting
paramwise_cfg = dict(norm_decay_mult=0.0, bias_decay_mult=0.0)

optim_wrapper = dict(
    constructor='mmrazor.SeparateOptimWrapperConstructor',
    architecture=dict(
        type='OptimWrapper',
        optimizer=dict(type='SGD', lr=0.5, momentum=0.9, weight_decay=3e-4),
        paramwise_cfg=paramwise_cfg),
    mutator=dict(
        type='OptimWrapper',
        optimizer=dict(type='Adam', lr=0.5, weight_decay=1e-3)))

# data preprocessor
data_preprocessor = dict(
    type='mmcls.ClsDataPreprocessor',
    # RGB format normalization parameters
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    # convert image from BGR to RGB
    to_rgb=True,
)

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='RandomResizedCrop', scale=224),
    dict(type='ColorJitter', brightness=0.2, contrast=0.2, saturation=0.2),
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),
    dict(type='PackClsInputs'),
]

test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='ResizeEdge', scale=256, edge='short'),
    dict(type='CenterCrop', crop_size=224),
    dict(type='PackClsInputs'),
]

train_dataloader = dict(
    batch_size=64,
    num_workers=4,
    dataset=dict(
        type=dataset_type,
        data_root='data/imagenet',
        ann_file='meta/train.txt',
        data_prefix='train',
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True, _scope_='mmcls'),
    persistent_workers=True,
)

val_dataloader = dict(
    batch_size=64,
    num_workers=4,
    dataset=dict(
        type=dataset_type,
        data_root='data/imagenet',
        ann_file='meta/val.txt',
        data_prefix='val',
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True, _scope_='mmcls'),
    persistent_workers=True,
)
val_evaluator = dict(type='mmcls.Accuracy', topk=(1, 5))

# If you want a standard test, please manually configure the test dataset
test_dataloader = val_dataloader
test_evaluator = val_evaluator

evaluation = dict(interval=1, metric='accuracy')

train_cfg = dict(by_epoch=True, max_epochs=max_search_epochs, val_interval=1)
val_cfg = dict()
test_cfg = dict()
custom_hooks = [dict(type='DMCPSubnetHook')]
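For readers skimming this config, the two `param_scheduler` entries combine into a warmup-then-cosine learning-rate curve: 10 epochs of linear warmup from half the base LR, then cosine annealing down to `eta_min`. Below is a minimal, self-contained sketch of the per-epoch value — an approximation only, since mmengine's real schedulers convert to iteration-based steps and apply `T_max` by their own bookkeeping:

```python
import math

BASE_LR = 0.5       # optimizer lr from optim_wrapper.architecture
WARMUP_END = 10     # end epoch of the LinearLR phase
MAX_EPOCHS = 100    # max_search_epochs
START_FACTOR = 0.5  # LinearLR start_factor
ETA_MIN = 0.08      # CosineAnnealingLR eta_min


def lr_at_epoch(epoch: int) -> float:
    """Approximate LR under LinearLR warmup followed by cosine annealing."""
    if epoch < WARMUP_END:
        # linearly interpolate the multiplier from START_FACTOR up to 1.0
        factor = START_FACTOR + (1.0 - START_FACTOR) * epoch / WARMUP_END
        return BASE_LR * factor
    # cosine-anneal from BASE_LR down to ETA_MIN over the remaining epochs
    progress = (epoch - WARMUP_END) / (MAX_EPOCHS - WARMUP_END)
    return ETA_MIN + 0.5 * (BASE_LR - ETA_MIN) * (1 + math.cos(math.pi * progress))


print(lr_at_epoch(0))    # warmup start: 0.25
print(lr_at_epoch(10))   # warmup end: 0.5
print(lr_at_epoch(100))  # final epoch: 0.08
```

Note that the real `CosineAnnealingLR` here is configured with `T_max=max_search_epochs` and `begin=10`, so its exact trajectory differs slightly from this sketch.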
134 changes: 134 additions & 0 deletions configs/nas/mmcls/autoformer/AUTOFORMER_SUBNET_B.yaml
@@ -0,0 +1,134 @@
backbone.base_embed_dims:
  chosen: 64
backbone.blocks.0.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.0.middle_channels:
  chosen: 3.5
backbone.blocks.0.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.0.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.1.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.1.middle_channels:
  chosen: 3.5
backbone.blocks.1.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.1.mutable_q_embed_dims:
  chosen: 64
backbone.blocks.10.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.10.middle_channels:
  chosen: 4.0
backbone.blocks.10.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.10.mutable_q_embed_dims:
  chosen: 64
backbone.blocks.11.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.11.middle_channels:
  chosen: 576
backbone.blocks.11.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.11.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.12.attn.mutable_attrs.num_heads:
  chosen: 9
backbone.blocks.12.middle_channels:
  chosen: 4.0
backbone.blocks.12.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.12.mutable_q_embed_dims:
  chosen: 9
backbone.blocks.13.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.13.middle_channels:
  chosen: 4.0
backbone.blocks.13.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.13.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.14.attn.mutable_attrs.num_heads:
  chosen: 8
backbone.blocks.14.middle_channels:
  chosen: 576
backbone.blocks.14.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.14.mutable_q_embed_dims:
  chosen: 8
backbone.blocks.15.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.15.middle_channels:
  chosen: 3.0
backbone.blocks.15.mutable_mlp_ratios:
  chosen: 3.0
backbone.blocks.15.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.2.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.2.middle_channels:
  chosen: 576
backbone.blocks.2.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.2.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.3.attn.mutable_attrs.num_heads:
  chosen: 8
backbone.blocks.3.middle_channels:
  chosen: 4.0
backbone.blocks.3.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.3.mutable_q_embed_dims:
  chosen: 8
backbone.blocks.4.attn.mutable_attrs.num_heads:
  chosen: 10
backbone.blocks.4.middle_channels:
  chosen: 576
backbone.blocks.4.mutable_mlp_ratios:
  chosen: 3.0
backbone.blocks.4.mutable_q_embed_dims:
  chosen: 10
backbone.blocks.5.attn.mutable_attrs.num_heads:
  chosen: 9
backbone.blocks.5.middle_channels:
  chosen: 3.0
backbone.blocks.5.mutable_mlp_ratios:
  chosen: 3.0
backbone.blocks.5.mutable_q_embed_dims:
  chosen: 9
backbone.blocks.6.attn.mutable_attrs.num_heads:
  chosen: 8
backbone.blocks.6.middle_channels:
  chosen: 576
backbone.blocks.6.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.6.mutable_q_embed_dims:
  chosen: 8
backbone.blocks.7.attn.mutable_attrs.num_heads:
  chosen: 8
backbone.blocks.7.middle_channels:
  chosen: 3.5
backbone.blocks.7.mutable_mlp_ratios:
  chosen: 3.5
backbone.blocks.7.mutable_q_embed_dims:
  chosen: 8
backbone.blocks.8.attn.mutable_attrs.num_heads:
  chosen: 9
backbone.blocks.8.middle_channels:
  chosen: 576
backbone.blocks.8.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.8.mutable_q_embed_dims:
  chosen: 9
backbone.blocks.9.attn.mutable_attrs.num_heads:
  chosen: 8
backbone.blocks.9.middle_channels:
  chosen: 576
backbone.blocks.9.mutable_mlp_ratios:
  chosen: 4.0
backbone.blocks.9.mutable_q_embed_dims:
  chosen: 8
backbone.mutable_depth:
  chosen: 14
backbone.mutable_embed_dims:
  chosen: 576
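A fixed subnet like the one above is just a flat mapping from mutable names to their chosen values. As a rough illustration only — mmrazor's actual loader goes through its `fix_subnet` machinery and a real YAML parser — the layout can be read with a few lines of plain Python:

```python
def parse_fix_subnet(text: str) -> dict:
    """Parse the flat `<mutable name>:` / `  chosen: <value>` layout
    into {name: chosen_value} without needing a YAML library."""
    chosen, current = {}, None
    for line in text.splitlines():
        if not line.strip():
            continue
        if line.startswith('  chosen:'):
            value = line.split(':', 1)[1].strip()
            # numbers stay numeric, anything else stays a string
            try:
                chosen[current] = float(value) if '.' in value else int(value)
            except ValueError:
                chosen[current] = value
        else:
            current = line.strip().rstrip(':')
    return chosen


sample = """backbone.mutable_depth:
  chosen: 14
backbone.mutable_embed_dims:
  chosen: 576
backbone.blocks.0.mutable_mlp_ratios:
  chosen: 3.5
"""
print(parse_fix_subnet(sample))
# → {'backbone.mutable_depth': 14, 'backbone.mutable_embed_dims': 576,
#    'backbone.blocks.0.mutable_mlp_ratios': 3.5}
```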
11 changes: 6 additions & 5 deletions configs/nas/mmcls/autoformer/README.md
@@ -44,15 +44,16 @@ CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
```bash
CUDA_VISIBLE_DEVICES=0 PORT=29500 ./tools/dist_test.sh \
configs/nas/mmcls/autoformer/autoformer_subnet_8xb128_in1k.py \
-$STEP2_CKPT 1 --work-dir $WORK_DIR \
---cfg-options algorithm.mutable_cfg=$STEP2_SUBNET_YAML
+none 1 --work-dir $WORK_DIR \
+--cfg-options model.init_cfg.checkpoint=$STEP1_CKPT model.init_weight_from_supernet=True

```

## Results and models

-| Dataset | Supernet | Subnet | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download | Remarks |
-| :------: | :------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | :-------: | :------: | :-------: | :-------: | :---------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: |
-| ImageNet | vit | [mutable](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/v0.1/nas/spos/spos_shufflenetv2_subnet_8xb128_in1k/spos_shufflenetv2_subnet_8xb128_in1k_flops_0.33M_acc_73.87_20211222-454627be_mutable_cfg.yaml?versionId=CAEQHxiBgICw5b6I7xciIGY5MjVmNWFhY2U5MjQzN2M4NDViYzI2YWRmYWE1YzQx) | 52.472 | 10.2 | 82.48 | 95.99 | [config](./autoformer_supernet_32xb256_in1k.py) | [model](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/x.pth) \| [log](https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmrazor/v0.1/nas/spos/x.log.json) | MMRazor searched |
+| Dataset | Supernet | Subnet | Params(M) | Flops(G) | Top-1 (%) | Top-5 (%) | Config | Download | Remarks |
+| :------: | :------: | :----------------------------------------------------------------: | :-------: | :------: | :-------: | :-------: | :---------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------: |
+| ImageNet | vit | [mutable](./configs/nas/mmcls/autoformer/AUTOFORMER_SUBNET_B.yaml) | 54.319 | 10.57 | 82.47 | 95.99 | [config](./autoformer_supernet_32xb256_in1k.py) | [model](https://download.openmmlab.com/mmrazor/v1/autoformer/autoformer_supernet_32xb256_in1k_20220919_110144-c658ce8f.pth) \| [log](https://download.openmmlab.com/mmrazor/v1/autoformer/autoformer_supernet_32xb256_in1k_20220919_110144-c658ce8f.json) | MMRazor searched |

**Note**:

17 changes: 17 additions & 0 deletions configs/nas/mmcls/autoformer/autoformer_subnet_8xb256_in1k.py
@@ -0,0 +1,17 @@
_base_ = 'autoformer_supernet_32xb256_in1k.py'

model = dict(
    _scope_='mmrazor',
    type='sub_model',
    cfg=_base_.supernet,
    # NOTE: You can replace the yaml with a mutable_cfg searched by yourself
    fix_subnet='configs/nas/mmcls/autoformer/AUTOFORMER_SUBNET_B.yaml',
    # You can also load the checkpoint of the supernet instead of a specific
    # subnet by modifying the `checkpoint` (path) in the following `init_cfg`
    # and setting `init_weight_from_supernet=True`.
    init_weight_from_supernet=False,
    init_cfg=dict(
        type='Pretrained',
        checkpoint=  # noqa: E251
        'https://download.openmmlab.com/mmrazor/v1/autoformer/autoformer_supernet_32xb256_in1k_20220919_110144-c658ce8f.pth',  # noqa: E501
        prefix='architecture.'))
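The `prefix='architecture.'` entry tells the `Pretrained` init to keep only the checkpoint keys under that prefix and strip it before loading into the subnet. A hedged sketch of that selection — the key names below are illustrative, and mmengine's actual implementation lives in its weight-initialization utilities:

```python
def strip_prefix(state_dict: dict, prefix: str) -> dict:
    """Keep only keys under `prefix` and drop the prefix itself,
    mimicking what `init_cfg`'s `prefix='architecture.'` selects."""
    return {k[len(prefix):]: v
            for k, v in state_dict.items()
            if k.startswith(prefix)}


# hypothetical checkpoint keys, for illustration only
ckpt = {
    'architecture.backbone.patch_embed.weight': 1,
    'architecture.head.fc.weight': 2,
    'mutator.some_buffer': 3,  # dropped: not under the prefix
}
print(strip_prefix(ckpt, 'architecture.'))
# → {'backbone.patch_embed.weight': 1, 'head.fc.weight': 2}
```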
@@ -52,7 +52,6 @@
 model = dict(
     type='mmrazor.Autoformer',
     architecture=supernet,
-    fix_subnet=None,
     mutator=dict(type='mmrazor.NasMutator'))

 # runtime setting
16 changes: 8 additions & 8 deletions configs/nas/mmcls/autoslim/README.md
@@ -17,43 +17,43 @@ Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
-configs/pruning/autoslim/autoslim_mbv2_supernet_8xb256_in1k.py 4 \
+configs/nas/autoslim/autoslim_mbv2_1.5x_supernet_8xb256_in1k.py 4 \
--work-dir $WORK_DIR
```

### Search for subnet on the trained supernet

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
-configs/pruning/autoslim/autoslim_mbv2_search_8xb1024_in1k.py 4 \
+configs/nas/autoslim/autoslim_mbv2_1.5x_search_8xb256_in1k.py 4 \
--work-dir $WORK_DIR --cfg-options load_from=$STEP1_CKPT
```

### Subnet retraining on ImageNet

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 ./tools/dist_train.sh \
-configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py 4 \
+configs/nas/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py 4 \
--work-dir $WORK_DIR \
---cfg-options algorithm.channel_cfg=configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml,configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml,configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
+--cfg-options algorithm.channel_cfg=configs/nas/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml,configs/nas/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml,configs/nas/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
```
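The comma-separated value handed to `--cfg-options` is split into a list of YAML paths before it reaches the algorithm. A simplified sketch of that parsing — mmengine's `DictAction` does considerably more (nested keys, type casting), so treat this as illustrative only:

```python
def parse_cfg_option(pair: str):
    """Rough sketch of how a `--cfg-options key=v1,v2,...` pair becomes
    a key and a list of values (a single value stays scalar)."""
    key, value = pair.split('=', 1)
    parts = value.split(',')
    return key, parts if len(parts) > 1 else parts[0]


key, cfgs = parse_cfg_option(
    'algorithm.channel_cfg='
    'configs/nas/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml,'
    'configs/nas/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml,'
    'configs/nas/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml')
print(key)        # → algorithm.channel_cfg
print(len(cfgs))  # → 3
```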

### Split checkpoint

```bash
python ./tools/model_converters/split_checkpoint.py \
-configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
+configs/nas/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
$RETRAINED_CKPT \
---channel-cfgs configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
+--channel-cfgs configs/nas/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml configs/nas/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml configs/nas/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
```

### Subnet inference

```bash
CUDA_VISIBLE_DEVICES=0 PORT=29500 ./tools/dist_test.sh \
-configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
+configs/nas/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
$SEARCHED_CKPT 1 --work-dir $WORK_DIR \
---cfg-options algorithm.channel_cfg=configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml # or modify the config directly
+--cfg-options algorithm.channel_cfg=configs/nas/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml # or modify the config directly
```

## Results and models