
Object Detection Knowledge Distillation (ODKD)


This branch is not yet feature-complete. For SSD and YOLOv5 distillation, check the other branches.

A release edition is coming soon.

Update

  1. The first edition is a refactor of the mbv2-lite branch, which implements Chen, G. et al. (2017), 'Learning efficient object detection models with knowledge distillation', on an SSD-lite structure.

  2. Replaced parts of the code with PyTorch APIs that provide the same functionality.

  3. Added beginner-friendly guidance.

  4. System architecture

(ODKD system architecture diagram)
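The Chen et al. (2017) approach combines a hard loss against ground-truth labels with a soft loss against the teacher's predictions. As a rough illustration of the classification part (this is a generic sketch of temperature-scaled distillation, not the repository's actual API; the function names, `alpha`, and `temperature` values here are illustrative):

```python
import torch
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    class distributions, scaled by T^2 to keep gradient magnitudes stable."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)

def distillation_loss(student_logits, teacher_logits, targets,
                      alpha=0.5, temperature=2.0):
    """Weighted sum of the hard cross-entropy loss (vs. ground truth)
    and the soft loss (vs. the teacher's softened predictions)."""
    hard = F.cross_entropy(student_logits, targets)
    soft = soft_label_loss(student_logits, teacher_logits, temperature)
    return alpha * hard + (1.0 - alpha) * soft
```

In the full detection setting, a loss of this form is applied to the classification head, with additional terms for bounding-box regression.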

Usage

```shell
# Install the package
$ python setup.py install --user

$ odkd-train ./training_config.yml -t

# Start training
$ odkd-train training_config.yml
# or launch distributed training across 2 processes
$ python -m torch.distributed.launch --nproc_per_node=2 `which odkd-train` training_config.yml

# Evaluate a trained run from its saved config
$ odkd-eval ${CHECKPOINTS_PATH}/${RUN_INDEX}/config.yml
```

TODO

  • Evaluation module

  • Logging module

  • COCO dataset support

  • YOLOv5 distillation