GAIA-dets provides the following tools:
- Train supernet: build up a search space and train a supernet on a large amount of data.
- Test supernet: define sampling rules and test subnets sampled from the supernet based on those rules.
- Finetune supernet: define sampling rules and finetune subnets sampled from the supernet based on those rules.
- Count flops: count the flops of subnets in the search space.
- Extract subnets: extract the weights of subnets from a supernet.
To begin with, you need a powerful supernet to do all of this:
- You can build up your own search space and train a supernet yourself.
- Or, you can use the arch and ckpt (coming soon) of the supernet in this repo.
Then you need to measure the overhead of subnets:
- You can count the flops and params of subnets from your customized search space.
- Or, you can use the flops.json in this repo, which records the information of each arch.
During downstream customization:
- You can design rules and directly sample subnets for fast-finetuning. This generates a file that records the performance of each subnet. Direct finetuning is usually applied when the downstream label space is a subset of the upstream label space.
- You can also design rules and sample subnets for testing first. This shrinks the search space, and you can then apply fast-finetuning based on rules over the test results.
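Conceptually, a sampling rule is a constraint over a subnet's arch and overhead. As a hedged illustration only (GAIA's real rules are defined in its config files, and their syntax may differ), a rule could be sketched as a Python predicate over the record format that flops.json uses:

```python
def passes_rule(record):
    """Hypothetical sampling rule: keep subnets under 50 GFLOPs whose
    backbone stages total at least 10 blocks. The field names mirror
    the flops.json records shown later in this README; GAIA's actual
    rule syntax lives in its configs and may differ."""
    flops_ok = record["overhead"]["flops"] < 50e9
    depth_ok = sum(record["arch"]["backbone"]["body"]["depth"]) >= 10
    return flops_ok and depth_ok
```

Tightening or loosening predicates like this is what "shrinking the search space" amounts to: fewer subnets survive the rule, so fewer need testing or finetuning.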
For final model training and extraction:
- You can directly use the best-performing fast-finetuned model.
- Or, you can extract the weights of the best-performing arch from the supernet and finetune it with your own tricks (like DCN, Cascade R-CNN, etc.).
Train the supernet:
```shell
cd /path/to/GAIA-det
sh scripts/train_local.sh 8 configs/local_examples/train_supernet/faster_rcnn_ar50to101_gsync.py /path/to/work_dir
```
Count the flops of subnets:
```shell
cd /path/to/GAIA-det
sh scripts/count_flops_local.sh 8 configs/local_examples/train_supernet/faster_rcnn_ar50to101_flops.py /path/to/work_dir
```
After this, you should have a /path/to/work_dir/flops.json that records the flops of each model. It looks like this:
```json
{"overhead": {"flops": 46314197434.0, "params": 28010543}, "arch": {"backbone": {"stem": {"width": 32}, "body": {"width": [48, 96, 192, 384], "depth": [2, 2, 5, 2]}}}, "data": {"input_shape": 480}}
{"overhead": {"flops": 47443272634.0, "params": 33024047}, "arch": {"backbone": {"stem": {"width": 32}, "body": {"width": [48, 96, 192, 384], "depth": [2, 2, 5, 4]}}}, "data": {"input_shape": 480}}
...
```
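Since flops.json is a JSON Lines file (one subnet record per line), it is easy to inspect programmatically. A minimal Python sketch, using the field names from the sample above, that loads the records and filters subnets under a FLOPs budget:

```python
import json

def load_subnets(path):
    """Parse a flops.json file: one JSON record per line (JSON Lines)."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

def under_budget(subnets, max_gflops):
    """Keep subnets whose counted flops fall within a GFLOPs budget."""
    return [s for s in subnets if s["overhead"]["flops"] <= max_gflops * 1e9]
```

For example, with the two records shown above, `under_budget(subnets, 47)` keeps only the first (~46.3 GFLOPs) record.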
Fast-finetune subnets sampled by your rules:
```shell
cd /path/to/GAIA-det
sh scripts/finetune_local.sh $NUM_GPUS $CONFIG $WORK_DIR $SUPERNET_CKPT $FLOPS_JSON
```
Or, you can test subnets first to shrink the search space:
```shell
cd /path/to/GAIA-det
sh scripts/test_local.sh $NUM_GPUS $CONFIG $WORK_DIR $SUPERNET_CKPT $FLOPS_JSON
```
and then apply fast-finetuning, with TEST_JSON=/path/to/work_dir/test_supernet/metric.json:
```shell
cd /path/to/GAIA-det
sh scripts/finetune_local.sh $NUM_GPUS $CONFIG $WORK_DIR $SUPERNET_CKPT $TEST_JSON
```
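Fast-finetuning records the performance of each sampled subnet in a results file. Assuming that file is also JSON Lines and that each record pairs an arch with its metrics (the key names `metric` and `bbox_mAP` below are assumptions, not GAIA's confirmed schema; inspect your own output to find the real field names), selecting the best subnet could be sketched as:

```python
import json

def best_subnet(path, metric="bbox_mAP"):
    """Return the record with the highest metric value.
    NOTE: the `metric`/`bbox_mAP` keys are hypothetical; check the
    actual schema of your performance file before using this."""
    with open(path) as f:
        records = [json.loads(line) for line in f if line.strip()]
    return max(records, key=lambda r: r["metric"][metric])
```

The winning arch can then be used directly, or extracted from the supernet for further finetuning as described below.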
Extract the weights of a subnet from the supernet:
```shell
cd /path/to/GAIA-det
sh scripts/extract_subnet.sh $NUM_GPUS $CONFIG $WORK_DIR $SUPERNET_CKPT
```
Coming soon.