Update cli-api-guide.md
grammar nits
jeanniefinks committed Jul 13, 2023
1 parent 34dedf4 commit e543448
docs/cli-api-guide.md

The Sparsify CLI/API is a Python package that allows you to run Sparsify Experiments locally, sync with the Sparsify Cloud, and integrate into your own workflows.

## Installing Sparsify

Next, install Sparsify on your training hardware by running the following command:

```bash
pip install sparsify-nightly
```

For more details and system/hardware requirements, see the [Installation](https://github.com/neuralmagic/sparsify#installation) section.
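
If you want to confirm the installation before continuing, a standard pip check will print the installed package's version and location (this is generic pip behavior, not a Sparsify-specific command):

```bash
pip show sparsify-nightly
```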

## Logging in to Sparsify

With Sparsify installed on your training hardware, you will need to authorize the local CLI to access your account.
This is done by running the `sparsify.login` command and providing your API key.
Locate your API key on the home page of the [Sparsify Cloud](https://apps.neuralmagic.com/sparsify) under the **'Get set up'** modal.
Once you have located this, copy the command or the API key itself and run the following command:

```bash
sparsify.login API_KEY
```

The `sparsify.login API_KEY` command is used to sync your local training environment with the Sparsify Cloud in order to keep track of your Experiments. Once you run the `sparsify.login API_KEY` command, you should see a confirmation via the console that you are logged in to Sparsify. To log out of Sparsify, use the `exit` command.

If you encounter any issues with your API key, reach out to the team via the [nm-sparsify Slack Channel](https://join.slack.com/t/discuss-neuralmagic/shared_invite/zt-1xkdlzwv9-2rvS6yQcCs7VDNUcWxctnw), via [email](mailto:rob@neuralmagic.com), or via [GitHub Issues](https://github.com/neuralmagic/sparsify/issues).


## Running an Experiment

Experiments are the core of sparsifying a model.
An Experiment applies sparsification algorithms to your model and dataset in one of three modes: One-Shot, Training-Aware, or Sparse-Transfer.

All Experiments are run locally on your training hardware and can be synced with Sparsify Cloud for further analysis and comparison.

To run an Experiment, you can use either the CLI or the API, depending on your use case.
The Sparsify Cloud provides a UI for exploring hyperparameters, predicting performance, and generating the desired CLI/API command.

The general command for running an Experiment is:

```bash
sparsify.run EXPERIMENT_TYPE --use-case USE_CASE --model MODEL --data DATA --optim-level OPTIM_LEVEL
```

Where the values for each of the arguments follow these general rules:
- EXPERIMENT_TYPE: one of `one-shot`, `training-aware`, or `sparse-transfer`.

- USE_CASE: the use case you're solving for, such as `image-classification`, `object-detection`, `text-classification`, a custom use case, etc. A full list of supported use cases for each Experiment type can be found [here](https://github.com/neuralmagic/sparsify/blob/main/docs/use-cases-guide.md).
- MODEL: the model you want to sparsify, which can be a model name such as `resnet50`, a stub from the [SparseZoo](https://sparsezoo.neuralmagic.com), or a path to a local model. For One-Shot, the model currently must be in ONNX format. For Training-Aware and Sparse-Transfer, the model must be in PyTorch format. More details on model formats can be found [here](https://github.com/neuralmagic/sparsify/blob/main/docs/models-guide.md).
- DATA: the dataset you want to use to sparsify the model. This can be a dataset name such as `imagenette` or a path to a local dataset. Currently, One-Shot only supports NPZ-formatted datasets. Training-Aware and Sparse-Transfer support PyTorch ImageFolder datasets for image classification, YOLOv5/v8 datasets for object detection and segmentation, and Hugging Face datasets for NLP/NLG. More details on dataset formats can be found [here](https://github.com/neuralmagic/sparsify/blob/main/docs/datasets-guide.md).
- OPTIM_LEVEL: the desired sparsification level, from 0 (none) to 1 (max). As a general rule, 0 is the baseline model, values below 0.3 only quantize the model, and values from 0.3 to 1.0 increase the model's sparsity and apply quantization. More details on sparsification levels can be found [here](https://github.com/neuralmagic/sparsify/blob/main/docs/optim-levels-guide.md); a worked example invocation follows this list.
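
Putting these rules together, a complete One-Shot invocation might look like the following sketch. The file paths and optim level are illustrative placeholders chosen to satisfy the rules above (an ONNX model and an NPZ dataset for One-Shot), not values prescribed by this guide:

```bash
# Illustrative One-Shot run: ONNX model plus NPZ calibration data,
# with a mid-range optimization level of 0.5.
sparsify.run one-shot --use-case image_classification --model ./model.onnx --data ./calibration_data.npz --optim-level 0.5
```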
One-Shot Experiments are the quickest way to create a faster and smaller version of your model.
The algorithms are applied to the model post-training, utilizing a calibration dataset, so they result in no further training time and much faster sparsification times compared with Training-Aware Experiments.
Generally, One-Shot Experiments result in a 3-5x speedup with minimal accuracy loss.
They are ideal for when you want to quickly sparsify your model and have limited time to spend on the sparsification process.

NLP Example:

```bash
sparsify.run training-aware --use-case text_classification --model bert-base --d
```
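
The command above is truncated mid-flag. A plausible completion, for illustration only, would supply the remaining dataset and optim-level arguments; the `sst2` dataset and `0.5` level here are assumptions, not values from the original guide:

```bash
# Hypothetical completion of the truncated command above;
# the --data and --optim-level values are assumptions.
sparsify.run training-aware --use-case text_classification --model bert-base --data sst2 --optim-level 0.5
```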
NLG Example:
Landing Soon!
## Comparing the Experiment results
Once you have run your Experiment, you can compare the results printed to the console using the `deepsparse.benchmark` command.
In the near future, you will be able to compare the results in the Cloud, measure other scenarios, and compare the results to other Experiments.
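
For example, you could benchmark the dense baseline and the sparsified model back to back and compare the reported throughput; the model paths here are illustrative:

```bash
# Benchmark both models with DeepSparse and compare the reported
# throughput (items/sec) between the two runs.
deepsparse.benchmark ./baseline/model.onnx
deepsparse.benchmark ./sparsified/model.onnx
```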
