wilxy/yolo_embedded_acceleration
yolo_prune

  1. Set weights < 0.01 to 0 (set PRUNE = 1 in the Makefile).

  2. Prune output feature maps according to BN scales (set SCALE_L1 = 1 in the Makefile).

  3. Sparsify convolutional connections using a mask of 0s and 1s (set MASK = 1 in the Makefile).

  4. Parallel computation (set MULTI_CORE = 1 in the Makefile).
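Taken together, the four switches above would sit in the darknet Makefile roughly as follows. The flag names come from the list above; the surrounding layout and default values are an assumption, not the repo's actual Makefile:

```makefile
# Build-time switches (names from the README; defaults are illustrative)
PRUNE=1        # step 1: zero out weights < 0.01
SCALE_L1=0     # step 2: prune feature maps by BN scales (L1 regularization)
MASK=0         # step 3: sparsify conv connections with a 0/1 mask
MULTI_CORE=0   # step 4: parallel computation
```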

The first function is based on the work at https://github.com/hjimce/compress_yolo.
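The first function is simple magnitude pruning: every weight whose absolute value falls below the threshold is zeroed. A minimal sketch in C, assuming darknet-style flat weight arrays (the function name is illustrative, not the repo's actual API):

```c
#include <math.h>

/* Zero out every weight whose magnitude is below `thresh`
 * (0.01 in this repo). `w` is a flat array of n layer weights,
 * as darknet stores them. */
static void prune_small_weights(float *w, int n, float thresh) {
    for (int i = 0; i < n; ++i) {
        if (fabsf(w[i]) < thresh) {
            w[i] = 0.0f;
        }
    }
}
```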

The second function prunes feature maps according to the BN scale parameters; the regularization method is the L1 norm.
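The channel-selection part of that step can be sketched as building a keep-mask over output channels from the BN scale (gamma) values: channels whose scale magnitude falls below a threshold are marked for removal. Function and variable names here are illustrative assumptions, not the repo's actual code:

```c
#include <math.h>

/* Build a 0/1 keep-mask over `channels` output channels from their
 * BN scale (gamma) values; a channel is kept only if |gamma| is at
 * least `thresh`. Returns the number of channels kept. */
static int bn_scale_keep_mask(const float *scales, int channels,
                              float thresh, int *keep) {
    int kept = 0;
    for (int c = 0; c < channels; ++c) {
        keep[c] = fabsf(scales[c]) >= thresh;
        kept += keep[c];
    }
    return kept;
}
```

L1 regularization on the gamma values during training pushes unimportant channels toward zero, which is what makes a simple threshold like this effective.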

Test results (YOLO2):

  Before pruning: runtime 15 s, mAP 0.62
  After pruning:  runtime 2 s,  mAP 0.57

When I test tiny-yolo, the runtime on CPU drops below 1 s.

If you want to use my code, please let me know.
