Pruning RNNs (University of Waterloo CS898 Deep Learning Spring 2017 Course Project)
Reproduces “Distilling the Knowledge in a Neural Network” (see the distillation sketch after this list)
PyTorch implementation of weight pruning (see the pruning sketch after this list)
Reduces model complexity by 612× and memory footprint by 19.5× relative to the base model, while still meeting the worst-case accuracy threshold.
Deep Learning Compression and Acceleration SDK -- deep model compression for edge and IoT embedded systems, and deep model acceleration for cloud and private servers
A collection of research paper summaries in the field of deep learning
Caffe/Neon prototxt training file for our Neurocomputing2017 work: Fuzzy Quantitative Deep Compression Network
PyTorch implementation of Structured Bayesian Pruning
Model compression methods applied to the monocular depth estimation network by Godard et al.
Neural Network Compression
Deep Face Model Compression
In this project I try to predict whether an employee will churn by building an extensive machine learning pipeline.
Code for the paper “Experience Loss”, in PyTorch.
TensorFlow implementation of weight and unit pruning and sparsification
Code for “Discrimination-aware Channel Pruning for Deep Neural Networks”
model-compression-and-acceleration-4-DNN
PyTorch real-time multi-person keypoint estimation
Optimizing Deep Convolutional Neural Network with Ternarized Weights and High Accuracy (see the ternarization sketch after this list)
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
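As orientation for the knowledge-distillation entry above, here is a minimal sketch of the distillation loss from Hinton et al.'s “Distilling the Knowledge in a Neural Network”, written in PyTorch. The temperature `T`, the mixing weight `alpha`, and the function name are illustrative assumptions, not code from any listed repository.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation loss; T and alpha are illustrative defaults."""
    # Soft-target term: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes stable as T varies.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```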
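Likewise, several entries above (the PyTorch and TensorFlow pruning implementations) revolve around magnitude-based weight pruning. The sketch below shows the basic idea under stated assumptions: zero out the smallest `sparsity` fraction of weights in each Linear/Conv2d layer and return per-layer masks so pruned weights can be held at zero during fine-tuning. The function name, the `sparsity` parameter, and the layer selection are illustrative, not any repository's actual API.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def magnitude_prune(model: nn.Module, sparsity: float = 0.5):
    """Zero the smallest `sparsity` fraction of weights per layer (illustrative)."""
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            flat = m.weight.abs().flatten()
            k = int(sparsity * flat.numel())
            if k == 0:
                continue
            threshold = flat.kthvalue(k).values   # k-th smallest magnitude
            mask = (m.weight.abs() > threshold).to(m.weight.dtype)
            m.weight.mul_(mask)                   # zero out the small weights
            masks[name] = mask                    # reapply after optimizer steps
    return masks
```

In iterative prune-and-retrain schedules, the stored masks are typically multiplied back into the weights after every optimizer step so pruned connections stay zero while `sparsity` is raised gradually.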
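Finally, for the ternarized-weights entry, here is a TWN-style ternarization sketch (Li & Liu, “Ternary Weight Networks”), not necessarily the method of the listed paper: each weight is mapped to {-s, 0, +s} using a threshold proportional to the largest magnitude. The threshold factor `t` and the function name are illustrative assumptions.

```python
import torch

@torch.no_grad()
def ternarize(w: torch.Tensor, t: float = 0.05):
    """Map weights to {-s, 0, +s}; threshold factor t is an illustrative choice."""
    delta = t * w.abs().max()          # magnitude threshold
    mask = w.abs() > delta             # weights that survive ternarization
    # Per-tensor scale s: mean magnitude of the surviving weights.
    scale = w.abs()[mask].mean() if mask.any() else w.new_tensor(0.0)
    return torch.sign(w) * mask.to(w.dtype) * scale
```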