Rebase release/0.1 off of main for 0.1.1 (#87)
* GA code, toctree links (#61)

- added tracking for docs output
- added help links for docs output

* Update README.md (#64)

removed placeholder reference to comingsoon repo in favor of active repo

* add decorator for flaky tf sparsity tests (#65)

* enable modifier groups in SparseML recipes (#66)

* enable modifier groups in SparseML recipes
* unit tests for YAML modifier list loading

* make build argument for nightly builds (#63)

* rst url syntax correction (#67)

corrects double slashes at the end of URLs by updating index.rst before compilation

* match types explicitly in torch qat quant observer wrappers (#68)

* docs updates (#71)

enhances the left nav for Help; after this merge, the docs for this repo need to be rebuilt so that docs.neuralmagic.com can be refreshed. cc @markurtz

* Rename KSModifier to PruningModifier (#76)

* Rename ConstantKSModifier to ConstantPruningModifier

* Rename GradualKSModifier to GMPruningModifier

* Fix broken link for Optimization Recipes (#75)

* Serialize/deserialize MaskedLayer (#69)

* Serialize/deserialize MaskedLayer

* Remove unused symbols

* Register pruning scheduler classes for serialization

* removed ScheduledOptimizer, moved logic to ScheduledModifierManager (#77)

* load recipes directly from SparseZoo (#72)

* load recipes into Managers from sparsezoo stubs
* move recipe_type handling to SparseZoo only and support loading SparseZoo recipe objects

* Revert "removed ScheduledOptimizer, moved logic to ScheduledModifierManager (#77)" (#80)

This reverts commit 6073abb.

* Update for 0.1.1 release (#82)

* Update for 0.1.1 release
- update the Python package version to 0.1.1
- setup.py: add version parts and _VERSION_MAJOR_MINOR for more flexibility with dependencies between Neural Magic packages
- add an optional deepsparse install pathway

* missed updating version to 0.1.1

* rename examples directory to integrations (#78)

* rwightman/pytorch-image-models integration (#70)

* load checkpoint file based on sparsezoo recipe in pytorch_vision script (#83)

* ultralytics/yolov5 integration (#73)

* pytorch sparse quantized transfer learning notebook (#81)

* load qat onnx models for conversion from file path (#86)

* Sparsification update (#84)

* Sparsification update
- update sparsification descriptions and move to the preferred verbiage

* update from comments on deepsparse for sparsification

* Update README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update README.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update docs/source/index.rst

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update docs/source/index.rst

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update docs/source/index.rst

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update docs/source/index.rst

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* Update docs/source/recipes.md

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* fix links in index.rst from reviewed content

* update overviews and taglines from doc

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

* blog style readme for torch sparse-quant TL notebook (#85)

Co-authored-by: Jeannie Finks (NM) <74554921+jeanniefinks@users.noreply.github.com>
Co-authored-by: Benjamin Fineran <bfineran@users.noreply.github.com>
Co-authored-by: Eldar Kurtic <eldar.ciki@gmail.com>
Co-authored-by: Tuan Nguyen <tuan@neuralmagic.com>
5 people committed Feb 25, 2021
1 parent 7c24b40 commit edcce80
Showing 47 changed files with 3,231 additions and 202 deletions.
11 changes: 6 additions & 5 deletions Makefile
@@ -1,13 +1,14 @@
.PHONY: build docs test

BUILDDIR := $(PWD)
CHECKDIRS := examples notebooks scripts src tests utils setup.py
CHECKGLOBS := 'examples/**/*.py' 'scripts/**/*.py' 'src/**/*.py' 'tests/**/*.py' 'utils/**/*.py' setup.py
CHECKDIRS := examples integrations notebooks scripts src tests utils setup.py
CHECKGLOBS := 'examples/**/*.py' 'integrations/**/*.py' 'scripts/**/*.py' 'src/**/*.py' 'tests/**/*.py' 'utils/**/*.py' setup.py
DOCDIR := docs
MDCHECKGLOBS := 'docs/**/*.md' 'docs/**/*.rst' 'examples/**/*.md' 'notebooks/**/*.md' 'scripts/**/*.md'
MDCHECKGLOBS := 'docs/**/*.md' 'docs/**/*.rst' 'examples/**/*.md' 'integrations/**/*.md' 'notebooks/**/*.md' 'scripts/**/*.md'
MDCHECKFILES := CODE_OF_CONDUCT.md CONTRIBUTING.md DEVELOPING.md README.md

TARGETS := "" # targets for running pytests: keras,onnx,pytorch,pytorch_models,pytorch_datasets,tensorflow_v1,tensorflow_v1_datasets
BUILD_ARGS := # set nightly to build nightly release
TARGETS := "" # targets for running pytests: keras,onnx,pytorch,pytorch_models,pytorch_datasets,tensorflow_v1,tensorflow_v1_models,tensorflow_v1_datasets
PYTEST_ARGS := ""
ifneq ($(findstring keras,$(TARGETS)),keras)
PYTEST_ARGS := $(PYTEST_ARGS) --ignore tests/sparseml/keras
@@ -63,7 +64,7 @@ docs:

# creates wheel file
build:
python3 setup.py sdist bdist_wheel
python3 setup.py sdist bdist_wheel $(BUILD_ARGS)

# clean package
clean:
72 changes: 44 additions & 28 deletions README.md
@@ -16,11 +16,11 @@ limitations under the License.

# ![icon for SparseMl](https://github.com/raw/neuralmagic/sparseml/main/docs/source/icon-sparseml.png) SparseML

### Libraries for state-of-the-art deep neural network optimization algorithms, enabling simple pipelines integration with a few lines of code
### Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models

<p>
<a href="https://github.com/neuralmagic/sparseml/blob/main/LICENSE">
<img alt="GitHub" src="https://img.shields.io/github/license/neuralmagic/comingsoon.svg?color=purple&style=for-the-badge" height=25>
<img alt="GitHub" src="https://img.shields.io/github/license/neuralmagic/sparseml.svg?color=purple&style=for-the-badge" height=25>
</a>
<a href="https://docs.neuralmagic.com/sparseml/">
<img alt="Documentation" src="https://img.shields.io/website/http/docs.neuralmagic.com/sparseml/index.html.svg?down_color=red&down_message=offline&up_message=online&style=for-the-badge" height=25>
@@ -44,25 +44,37 @@ limitations under the License.

## Overview

SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art optimization algorithms such as [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to any neural network. General, recipe-driven approaches built around these optimizations enable the simplification of creating faster and smaller models for the ML performance community at large.
SparseML is a toolkit that includes APIs, CLIs, scripts and libraries that apply state-of-the-art sparsification algorithms such as pruning and quantization to any neural network.
General, recipe-driven approaches built around these algorithms enable the simplification of creating faster and smaller models for the ML performance community at large.

SparseML is integrated for easy model optimizations within the [PyTorch](https://pytorch.org/),
[Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems currently.
This repository contains integrations within the [PyTorch](https://pytorch.org/), [Keras](https://keras.io/), and [TensorFlow V1](http://tensorflow.org/) ecosystems, allowing for seamless model sparsification.

### Related Products
## Sparsification

- [DeepSparse](https://github.com/neuralmagic/deepsparse): CPU inference engine that delivers unprecedented performance for sparse models
- [SparseZoo](https://github.com/neuralmagic/sparsezoo): Neural network model repository for highly sparse models and optimization recipes
- [Sparsify](https://github.com/neuralmagic/sparsify): Easy-to-use autoML interface to optimize deep neural networks for better inference performance and a smaller footprint
Sparsification is the process of taking a trained deep learning model and removing redundant information from the over-precise and over-parameterized network, resulting in a faster and smaller model.
Techniques for sparsification are all-encompassing, including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
When implemented correctly, these techniques result in significantly more performant and smaller models with limited to no effect on the baseline metrics.
For example, pruning plus quantization can give over [7x improvements in performance](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) while recovering to nearly the same baseline accuracy.

The Deep Sparse product suite builds on top of sparsification, enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
- Download a sparsification recipe and sparsified model from the [SparseZoo](https://github.com/neuralmagic/sparsezoo).
- Alternatively, create a recipe for your model using [Sparsify](https://github.com/neuralmagic/sparsify).
- Apply your recipe with only a few lines of code using [SparseML](https://github.com/neuralmagic/sparseml).
- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the [DeepSparse Engine](https://github.com/neuralmagic/deepsparse).


**Full Deep Sparse product flow:**

<img src="https://docs.neuralmagic.com/docs/source/sparsification/flow-overview.svg" width="960px">

## Quick Tour

To enable flexibility, ease of use, and repeatability, optimizing a model is generally done using a recipe file.
The files encode the instructions needed for modifying the model and/or training process as a list of modifiers.
To enable flexibility, ease of use, and repeatability, sparsifying a model is generally done using a recipe.
The recipes encode the instructions needed for modifying the model and/or training process as a list of modifiers.
Example modifiers can be anything from setting the learning rate for the optimizer to gradual magnitude pruning.
The files are written in [YAML](https://yaml.org/) and stored in YAML or [markdown](https://www.markdownguide.org/) files using [YAML front matter](https://assemble.io/docs/YAML-front-matter.html).
The rest of the SparseML system is coded to parse the recipe files into a native format for the desired framework
and apply the modifications to the model and training pipeline.
The rest of the SparseML system is coded to parse the recipes into a native format for the desired framework and apply the modifications to the model and training pipeline.

A sample recipe for pruning a model generally looks like the following:

@@ -91,18 +103,21 @@ modifiers:
params: ['sections.0.0.conv1.weight', 'sections.0.0.conv2.weight', 'sections.0.0.conv3.weight']
```
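To make the recipe-to-code handoff concrete, here is a minimal, hedged sketch of parsing such a recipe with the PyTorch manager; the file name and modifier values are illustrative placeholders in the spirit of the sample above, not taken from this repository.

```python
# Hedged sketch: recipes are plain YAML (or markdown with YAML front matter)
# listing modifiers; the manager parses them into framework-native objects.
# The modifier values and file name below are illustrative placeholders.
from sparseml.pytorch.optim import ScheduledModifierManager

recipe_yaml = """
modifiers:
    - !GMPruningModifier
        start_epoch: 0.0
        end_epoch: 10.0
        update_frequency: 1.0
        init_sparsity: 0.05
        final_sparsity: 0.85
        params: ['sections.0.0.conv1.weight']
"""

with open("recipe.yaml", "w") as recipe_file:
    recipe_file.write(recipe_yaml)

# parse the recipe into PyTorch-native modifier objects
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
```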

More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/optimization-recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.
More information on the available recipes, formats, and arguments can be found [here](https://github.com/neuralmagic/sparseml/blob/main/docs/source/recipes.md). Additionally, all code implementations of the modifiers under the `optim` packages for the frameworks are documented with example YAML formats.

Pre-configured recipes and the resulting models can be explored and downloaded from the [SparseZoo](https://github.com/neuralmagic/sparsezoo). Also, [Sparsify](https://github.com/neuralmagic/sparsify) enables autoML style creation of optimization recipes for use with SparseML.

For a more in-depth read, check out [SparseML documentation](https://docs.neuralmagic.com/sparseml/).

### PyTorch Optimization
### PyTorch Sparsification

The PyTorch optimization libraries are located under the `sparseml.pytorch.optim` package.
Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into PyTorch training pipelines.
The PyTorch sparsification libraries are located under the `sparseml.pytorch.optim` package.
Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into PyTorch training pipelines.

The integration is done using the `ScheduledOptimizer` class. It is intended to wrap your current optimizer and its step function. The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file. With this setup, the training process can then be modified as desired to optimize the model.
The integration is done using the `ScheduledOptimizer` class.
It is intended to wrap your current optimizer and its step function.
The step function then calls into the `ScheduledModifierManager` class which can be created from a recipe file.
With this setup, the training process can then be modified as desired to sparsify the model.

To enable all of this, the integration code you'll need to write is only a handful of lines:
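For illustration, a minimal sketch of that integration under stated assumptions: the model, recipe path, and batch count are placeholders, and the `ScheduledOptimizer` call mirrors the signature shown in the surrounding context rather than this repository's exact snippet.

```python
# Hedged sketch of wrapping a PyTorch optimizer with ScheduledOptimizer.
# `model`, "recipe.yaml", and `num_train_batches` are placeholders.
import torch
from torch.optim import SGD
from sparseml.pytorch.optim import ScheduledModifierManager, ScheduledOptimizer

model = torch.nn.Linear(10, 2)      # stand-in for a real network
optimizer = SGD(model.parameters(), lr=0.1)
num_train_batches = 100             # batches per epoch from your data loader

manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = ScheduledOptimizer(
    optimizer, model, manager, steps_per_epoch=num_train_batches
)

# train as usual; each optimizer.step() also advances the recipe's modifiers
for _ in range(num_train_batches):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).sum()  # placeholder loss
    loss.backward()
    optimizer.step()
```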

@@ -121,11 +136,11 @@ optimizer = ScheduledOptimizer(optimizer, model, manager, steps_per_epoch=num_tr

### Keras Optimization

The Keras optimization libraries are located under the `sparseml.keras.optim` package.
Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into Keras training pipelines.
The Keras sparsification libraries are located under the `sparseml.keras.optim` package.
Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into Keras training pipelines.

The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
This class handles modifying the Keras objects for the desired optimizations using the `modify` method.
This class handles modifying the Keras objects for the desired algorithms using the `modify` method.
The edited model, optimizer, and any callbacks necessary to modify the training process are returned.
The model and optimizer can be used normally and the callbacks must be passed into the `fit` or `fit_generator` function.
If using `train_on_batch`, the callbacks must be invoked after each call.
@@ -155,13 +170,14 @@ model.fit(..., callbacks=callbacks)
save_model = manager.finalize(model)
```
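As a rough sketch of the flow just described, under the assumption that `modify` takes the model, optimizer, and steps per epoch and returns the edited model, optimizer, and callbacks (verify the exact signature against `sparseml.keras.optim`); the data, loss, and recipe path are placeholders.

```python
# Hedged sketch of the Keras integration; the `modify` signature and return
# order are assumptions to verify, and the data/recipe path are placeholders.
import tensorflow as tf
from sparseml.keras.optim import ScheduledModifierManager

model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(10,))])
optimizer = tf.keras.optimizers.Adam()
steps_per_epoch = 100  # placeholder

manager = ScheduledModifierManager.from_yaml("recipe.yaml")
model, optimizer, callbacks = manager.modify(model, optimizer, steps_per_epoch)

model.compile(optimizer=optimizer, loss="mse")
# the returned callbacks drive the recipe's modifiers during training
model.fit(tf.random.normal((32, 10)), tf.random.normal((32, 2)),
          epochs=2, callbacks=callbacks)

saved_model = manager.finalize(model)  # strip modifier wrappers before saving
```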

### TensorFlow V1 Optimization
### TensorFlow V1 Sparsification

The TensorFlow optimization libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package. Inside are APIs designed to make model optimization as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.
The TensorFlow sparsification libraries for TensorFlow version 1.X are located under the `sparseml.tensorflow_v1.optim` package.
Inside are APIs designed to make model sparsification as easy as possible by integrating seamlessly into TensorFlow V1 training pipelines.

The integration is done using the `ScheduledModifierManager` class which can be created from a recipe file.
This class handles modifying the TensorFlow graph for the desired optimizations.
With this setup, the training process can then be modified as desired to optimize the model.
This class handles modifying the TensorFlow graph for the desired algorithms.
With this setup, the training process can then be modified as desired to sparsify the model.

#### Estimator-Based pipelines

@@ -185,7 +201,7 @@ manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)
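For the estimator path, a brief hedged sketch: the `model_fn` and estimator construction are placeholders, and only the `modify_estimator` call mirrors the context shown above.

```python
# Hedged sketch of the estimator-based integration; `model_fn` and the
# estimator setup are placeholders, not working model code.
import tensorflow as tf  # assumes a TensorFlow 1.x environment
from sparseml.tensorflow_v1.optim import ScheduledModifierManager

def model_fn(features, labels, mode):
    ...  # placeholder: build the graph and return a tf.estimator.EstimatorSpec

estimator = tf.estimator.Estimator(model_fn=model_fn)
num_train_batches = 100  # placeholder

manager = ScheduledModifierManager.from_yaml("recipe.yaml")
# rewires the estimator so the recipe's modifiers are applied during train()
manager.modify_estimator(estimator, steps_per_epoch=num_train_batches)
```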
Session-based pipelines need a little more work than estimator-based pipelines; however,
the integration is still designed to require only a few lines of code.
After graph creation, the manager's `create_ops` method must be called.
This will modify the graph as needed for the optimizations and return modifying ops and extras.
This will modify the graph as needed for the algorithms and return modifying ops and extras.
After creating the session and training normally, call into `session.run` with the modifying ops after each step.
Modifying extras contain objects such as tensorboard summaries of the modifiers to be used if desired.
Finally, once completed, `complete_graph` must be called to remove the modifying ops for saving and export.
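A hedged sketch of that session-based sequence follows, with graph construction and the training step left as placeholders and the `create_ops`/`complete_graph` signatures treated as assumptions to verify against the package.

```python
# Hedged sketch of the session-based integration; the graph, train op, and
# exact create_ops/complete_graph signatures are placeholders or assumptions.
import tensorflow as tf  # assumes a TensorFlow 1.x environment
from sparseml.tensorflow_v1.optim import ScheduledModifierManager

num_train_batches = 100  # placeholder

with tf.Graph().as_default() as graph:
    # ... build the model and training op here (placeholder) ...
    train_op = tf.no_op(name="train_step")

    manager = ScheduledModifierManager.from_yaml("recipe.yaml")
    mod_ops, mod_extras = manager.create_ops(steps_per_epoch=num_train_batches)
    # mod_extras holds objects such as tensorboard summaries (unused here)

    with tf.Session(graph=graph) as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(num_train_batches):
            sess.run(train_op)
            sess.run(mod_ops)      # advance the recipe's modifiers each step
        manager.complete_graph()   # remove modifying ops before save/export
```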
@@ -289,7 +305,7 @@ Install with pip using:
pip install sparseml
```

Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [examples](https://github.com/neuralmagic/sparseml/blob/main/examples/)
Then if you would like to explore any of the [scripts](https://github.com/neuralmagic/sparseml/blob/main/scripts/), [notebooks](https://github.com/neuralmagic/sparseml/blob/main/notebooks/), or [integrations](https://github.com/neuralmagic/sparseml/blob/main/integrations/)
clone the repository and install any additional dependencies as required.

#### Supported Framework Versions
@@ -343,7 +359,7 @@ Note, TensorFlow V1 is no longer being built for newer operating systems such as

## Contributing

We appreciate contributions to the code, examples, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).
We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! [Learn how here](https://github.com/neuralmagic/sparseml/blob/main/CONTRIBUTING.md).

## Join the Community

5 changes: 5 additions & 0 deletions docs/source/conf.py
@@ -86,6 +86,11 @@
html_theme = "sphinx_rtd_theme"
html_logo = "icon-sparseml.png"

html_theme_options = {
    'analytics_id': 'UA-128364174-1',  # Provided by Google in your dashboard
    'analytics_anonymize_ip': False,
}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".