scale-adjusted-training

PyTorch implementation of Towards Efficient Training for Neural Network Quantization

Introduction

This repo implements Scale-Adjusted Training (SAT) from Towards Efficient Training for Neural Network Quantization, including:

  1. Constant-rescaling DoReFa quantization
  2. Calibrated-gradient PACT (CGPACT)
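To make the first item concrete, below is a minimal, dependency-free sketch of k-bit DoReFa weight quantization with a constant rescale applied afterwards. The `rescale` argument is a placeholder for SAT's variance-preserving constant (not specified in this README); with `rescale=1.0` this reduces to standard DoReFa. This is an illustration, not the repo's actual layer implementation.

```python
import math

def quantize_k(x, k):
    """Uniform k-bit quantizer on [0, 1]: snap to 2**k - 1 levels.
    During training the straight-through estimator handles gradients."""
    n = float(2 ** k - 1)
    return round(x * n) / n

def dorefa_weights(weights, k, rescale=1.0):
    """k-bit DoReFa weight quantization, then a constant rescale.

    `rescale` stands in for SAT's variance-preserving factor (an
    assumption here); standard DoReFa corresponds to rescale=1.0.
    """
    t = [math.tanh(w) for w in weights]
    m = max(abs(v) for v in t)          # per-tensor normalizer
    out = []
    for v in t:
        x = v / (2 * m) + 0.5           # map tanh(w) into [0, 1]
        q = 2 * quantize_k(x, k) - 1    # quantize, map back to [-1, 1]
        out.append(rescale * q)
    return out
```

For example, `dorefa_weights([0.5, -1.0, 0.1], k=2)` snaps every weight onto the four-level grid {-1, -1/3, 1/3, 1} before the rescale is applied.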
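For the second item, the sketch below shows the PACT activation quantizer together with a calibrated gradient for the clipping level alpha. Original PACT backpropagates 1 to alpha when the input is clipped and 0 otherwise; the calibrated version keeps the quantization-residue term inside the clipping range, which follows from differentiating the quantized output under the straight-through estimator. This is a scalar illustration under that assumption, not the repo's CGPACT layer.

```python
def quantize_k(x, k):
    """Uniform k-bit quantizer on [0, 1]."""
    n = float(2 ** k - 1)
    return round(x * n) / n

def cgpact_forward(x, alpha, k):
    """PACT forward: clip to [0, alpha], quantize to k bits, rescale."""
    y = min(max(x, 0.0), alpha)
    return alpha * quantize_k(y / alpha, k)

def cgpact_grad_alpha(x, alpha, k):
    """Calibrated gradient of the quantized output w.r.t. alpha.

    Original PACT uses 1 for x >= alpha and 0 for x < alpha; the
    calibrated gradient adds the residue q(x/alpha) - x/alpha
    inside the clipping range (an assumption based on the paper).
    """
    if x >= alpha:
        return 1.0
    if x <= 0.0:
        return 0.0
    u = x / alpha
    return quantize_k(u, k) - u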

TODO

  • Constant-rescaling DoReFaQuantize layer
  • CGPACT layer
  • Test with MobileNetV1
  • Test with MobileNetV2
  • Test with ResNet-50

Acknowledgement
