Commit

Create linkcheck.yml (#1525)
* Create linkcheck.yml

* Fix links

* Missed saving state

* .

* Update
mgoin committed Sep 5, 2023
1 parent df570d1 commit 90b664a
Showing 23 changed files with 80 additions and 50 deletions.
22 changes: 22 additions & 0 deletions .github/workflows/linkcheck.yml
@@ -0,0 +1,22 @@
name: Check Markdown links

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          use-quiet-mode: 'yes'
          config-file: '.github/workflows/mlc_config.json'
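
Because the workflow also declares a `workflow_dispatch` trigger, the link check can be started outside the Actions tab as well. A minimal sketch using the GitHub CLI (assuming `gh` is installed and authenticated against this repository):

```bash
# Trigger the link check manually on the main branch.
gh workflow run linkcheck.yml --ref main

# List recent runs of this workflow to check the outcome.
gh run list --workflow=linkcheck.yml --limit 5
```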
13 changes: 13 additions & 0 deletions .github/workflows/mlc-config.json
@@ -0,0 +1,13 @@
{
  "ignorePatterns": [
    {
      "pattern": ".*localhost.*"
    },
    {
      "pattern": ".*127\\.0\\.0\\.1.*"
    },
    {
      "pattern": ".*0\\.0\\.0\\.0.*"
    }
  ]
}
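
The action wraps the `markdown-link-check` npm tool, so the same ignore patterns can be exercised locally before pushing. A rough sketch, assuming Node.js is available and using the config path referenced by the workflow above:

```bash
# Check one file quietly (errors only), reusing the CI ignore patterns.
npx markdown-link-check -q -c .github/workflows/mlc_config.json README.md

# Optionally sweep every tracked Markdown file the same way.
git ls-files '*.md' | xargs -n1 npx markdown-link-check -q -c .github/workflows/mlc_config.json
```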
2 changes: 1 addition & 1 deletion README.md
@@ -164,7 +164,7 @@ sparseml.yolov5.train \
--hyp hyps/hyp.finetune.yaml --cfg yolov5s.yaml --patience 0
```

- - Check out the [YOLOv5 CLI example](ultralytics-yolov5/tutorials/sparse-transfer-learning.md) for more details on the YOLOv5 training pipeline
+ - Check out the [YOLOv5 CLI example](integrations/ultralytics-yolov5/tutorials/sparse-transfer-learning.md) for more details on the YOLOv5 training pipeline
- Check out the [Hugging Face CLI example](integrations/huggingface-transformers/tutorials/sparse-transfer-learning-bert.md) for more details on the available NLP training pipelines
- Check out the [Torchvision CLI example](integrations/torchvision/tutorials/sparse-transfer-learning.md) for more details on the image classification training pipelines

@@ -302,13 +302,13 @@ The command creates a `./deployment` folder in your local directory, which conta

Take a look at the tutorials for more examples in other use cases:

- - [Sparse Transfer with GLUE Datasets (SST2) for sentiment analysis](tutorials/sentiment-analysis/docs-sentiment-analysis-python-sst2.ipynb)
- - [Sparse Transfer with Custom Datasets (RottenTomatoes) and Custom Teacher from HF Hub for sentiment analysis](tutorials/sentiment-analysis/docs-sentiment-analysis-python-custom-teacher-rottentomatoes)
- - [Sparse Transfer with GLUE Datasets (QQP) for multi-input text classification](tutorials/text-classification/docs-text-classification-python-qqp.ipynb)
- - [Sparse Transfer with Custom Datasets (SICK) for multi-input text classification](tutorials/text-classification/docs-text-classification-python-sick.ipynb)
- - [Sparse Transfer with Custom Datasets (TweetEval) and Custom Teacher for single input text classificaiton](tutorials/text-classification/docs-text-classification-python-custom-teacher-tweeteval.ipynb)
- - [Sparse Transfer with Custom Datasets (GoEmotions) for multi-label text classification](tutorials/text-classification/docs-text-classification-python-multi-label-go_emotions.ipynb)
- - [Sparse Transfer with Conll2003 for named entity recognition](tutorials/token-classification/docs-token-classification-python-conll2003.ipynb)
- - [Sparse Transfer with Custom Datasets (WNUT) and Custom Teacher for named entity recognition](tutorials/token-classification/docs-token-classification-custom-teacher-wnut.ipynb)
+ - [Sparse Transfer with GLUE Datasets (SST2) for sentiment analysis](sentiment-analysis/docs-sentiment-analysis-python-sst2.ipynb)
+ - [Sparse Transfer with Custom Datasets (RottenTomatoes) and Custom Teacher from HF Hub for sentiment analysis](sentiment-analysis/docs-sentiment-analysis-python-custom-teacher-rottentomatoes.ipynb)
+ - [Sparse Transfer with GLUE Datasets (QQP) for multi-input text classification](text-classification/docs-text-classification-python-qqp.ipynb)
+ - [Sparse Transfer with Custom Datasets (SICK) for multi-input text classification](text-classification/docs-text-classification-python-sick.ipynb)
+ - [Sparse Transfer with Custom Datasets (TweetEval) and Custom Teacher for single input text classification](text-classification/docs-text-classification-python-custom-teacher-tweeteval.ipynb)
+ - [Sparse Transfer with Custom Datasets (GoEmotions) for multi-label text classification](text-classification/docs-text-classification-python-multi-label-go_emotions.ipynb)
+ - [Sparse Transfer with Conll2003 for named entity recognition](token-classification/docs-token-classification-python-conll2003.ipynb)
+ - [Sparse Transfer with Custom Datasets (WNUT) and Custom Teacher for named entity recognition](token-classification/docs-token-classification-python-custom-teacher-wnut.ipynb)
- Sparse Transfer with SQuAD (example coming soon!)
- Sparse Transfer with Squadshifts Amazon (example coming soon!)
@@ -175,7 +175,7 @@ A `deployment` folder is created in your local directory, which has all of the f

## Sparse Transfer Learning with a Custom Dataset (WNUT_17)

- Beyond the Conll2003 dataset, we can also use a dataset from the Hugging Face Hub or from local files. Let's try an example of each for the sentiment analysis using [WNUT 17](wnut_17), which is also a NER task.
+ Beyond the Conll2003 dataset, we can also use a dataset from the Hugging Face Hub or from local files. Let's try an example of each for the sentiment analysis using WNUT_17, which is also a NER task.

For simplicity, we will perform the fine-tuning without distillation. Although the transfer learning recipe contains distillation
modifiers, by setting `--distill_teacher disable` we instruct SparseML to skip distillation.
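
For illustration, such a run might look roughly like the sketch below. Only `--distill_teacher disable` comes from the text above; the entrypoint name, the zoo/recipe placeholders, and the remaining flags are assumptions modeled on SparseML's Hugging Face integration, so consult the CLI `--help` for the exact interface.

```bash
# Hypothetical sketch: sparse transfer on WNUT_17 with distillation disabled.
sparseml.transformers.train.token_classification \
  --model_name_or_path "zoo:<sparse-bert-stub>" \
  --recipe "zoo:<transfer-recipe-stub>" \
  --dataset_name wnut_17 \
  --distill_teacher disable \
  --output_dir ./wnut_17_transfer \
  --do_train --do_eval
```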
2 changes: 1 addition & 1 deletion integrations/old-examples/dbolya-yolact/README.md
@@ -106,7 +106,7 @@ The following table lays out the root-level files and folders along with a descr
| Folder/File Name | Description |
|-------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------|
| [recipes](./recipes) | Typical recipes for sparsifying YOLACT models along with any downloaded recipes from the SparseZoo. |
- | [yolact](./yolact) | Integration repository folder used to train and sparsify YOLACT models (`setup_integration.sh` must run first). |
+ | yolact | Integration repository folder used to train and sparsify YOLACT models (`setup_integration.sh` must run first). |
| [README.md](./README.md) | Readme file. |
| [tutorials](./tutorials) | Easy to follow sparsification tutorials for YOLACT models. |

2 changes: 1 addition & 1 deletion integrations/old-examples/keras/README.md
@@ -31,7 +31,7 @@ The techniques include, but are not limited to:

## Tutorials

- - [Classification](https://github.com/neuralmagic/sparseml/blob/main/integrations/keras/notebooks/classification.ipynb)
+ - [Classification](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/keras/notebooks/classification.ipynb)

## Installation

@@ -74,7 +74,7 @@ Note: The models above were originally trained and sparsified on the [ImageNet](

- After noting respective SparseZoo model stub, [train.py](../train.py) script can be used to download checkpoint and [Imagenette](https://github.com/fastai/imagenette) and kick-start transfer learning.
The transfer learning process itself is guided using recipes; We include example [recipes](../recipes) for classification along with others in the SparseML [GitHub repository](https://github.com/neuralmagic/sparseml).
- [Learn more about recipes and modifiers](../../../docs/source/recipes.md).
+ [Learn more about recipes and modifiers](https://github.com/neuralmagic/sparseml/tree/main/docs/source/recipes.md).

- Run the following example command to kick off transfer learning for [ResNet-50](https://arxiv.org/abs/1512.03385) starting from a moderately pruned checkpoint from [SparseZoo](https://sparsezoo.neuralmagic.com/):
```
@@ -49,7 +49,7 @@ pip install "sparseml[torchvision, dev]"
Recipes are YAML or Markdown files that SparseML uses to easily define and control the sparsification of a model.
Recipes consist of a series of `Modifiers` that can influence the training process in different ways. A list of
common modifiers and their uses is provided
- [here](../../../docs/source/recipes.md#modifiers-intro).
+ [here](https://github.com/neuralmagic/sparseml/tree/main/docs/source/recipes.md#modifiers-intro).

SparseML provides a recipe for sparsifying a ResNet-50 model trained on the tiny Imagenette dataset. The recipe can
be viewed in the browser
@@ -71,7 +71,7 @@ of the parameters list for a single `GMPruningModifier`.

Recipes can integrated into training flows with a couple of lines of code by using a `ScheduledModifierManager`
that wraps the PyTorch `Optimizer` step. An example of how this is done can be found
- [here](../../../docs/source/code.md#pytorch-sparsification).
+ [here](https://github.com/neuralmagic/sparseml/tree/main/docs/source/code.md#pytorch-sparsification).

For this example, we can use the `sparseml.image_classification.train` utility. This utility runs a
PyTorch training flow that is modified by a `ScheduledModifierManager` and takes a recipe as an input.
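
As a rough sketch of that utility (the flag names and recipe path below are illustrative assumptions; `sparseml.image_classification.train --help` lists the actual options):

```bash
# Hypothetical sketch: recipe-driven ResNet-50 training on Imagenette.
sparseml.image_classification.train \
  --recipe-path ./recipes/resnet50-imagenette-pruning.yaml \
  --arch-key resnet50 \
  --dataset imagenette \
  --dataset-path ./data \
  --train-batch-size 128
```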
4 changes: 2 additions & 2 deletions integrations/old-examples/rwightman-timm/README.md
@@ -81,7 +81,7 @@ python train.py \
```

Documentation on the original script can be found
- [here](https://rwightman.github.io/pytorch-image-models/scripts/).
+ [here](https://huggingface.co/docs/timm/training_script).
The latest commit hash that `train.py` is based on is included in the docstring.


@@ -112,5 +112,5 @@ python export.py \
--config ./path/to/checkpoint/args.yaml
```

- The DeepSparse Engine [accepts ONNX formats](https://docs.neuralmagic.com/sparseml/source/onnx_export.html) and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
+ The DeepSparse Engine [accepts ONNX formats](https://docs.neuralmagic.com/archive/sparseml/source/onnx_export.html) and is engineered to significantly speed up inference on CPUs for the sparsified models from this integration.
Examples for loading, benchmarking, and deploying can be found in the [DeepSparse repository here](https://github.com/neuralmagic/deepsparse).
2 changes: 1 addition & 1 deletion integrations/old-examples/tensorflow_v1/README.md
@@ -31,7 +31,7 @@ The techniques include, but are not limited to:

## Tutorials

- - [Classification](https://github.com/neuralmagic/sparseml/blob/main/integrations/tensorflow_v1/notebooks/classification.ipynb)
+ - [Classification](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/tensorflow_v1/notebooks/classification.ipynb)

## Installation

4 changes: 2 additions & 2 deletions integrations/old-examples/ultralytics-yolov3/README.md
@@ -35,8 +35,8 @@ The techniques include, but are not limited to:

## Tutorials

- - [Sparsifying YOLOv3 Using Recipes](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/tutorials/sparsifying_yolov3_using_recipes.md)
- - [Sparse Transfer Learning With YOLOv3](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/tutorials/yolov3_sparse_transfer_learning.md)
+ - [Sparsifying YOLOv3 Using Recipes](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/tutorials/sparsifying_yolov3_using_recipes.md)
+ - [Sparse Transfer Learning With YOLOv3](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/tutorials/yolov3_sparse_transfer_learning.md)

## Installation

@@ -149,7 +149,7 @@ pruning_modifiers:

This recipe creates a sparse, [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model that achieves 97% recovery of its baseline accuracy on the COCO detection dataset.
Training was done using 4 GPUs at half precision using a total training batch size of 256 with the
- [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3).
+ [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3).

When running, adjust hyperparameters based on training environment and dataset.

@@ -159,7 +159,7 @@

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
The contents of the `hyp.pruned.yaml` hyperparameters file is given below.
Adjust the script command for your GPU device setup.
@@ -148,7 +148,7 @@ pruning_modifiers:

This recipe creates a sparse [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model in a shortened schedule as compared to the original pruned recipe.
It will train faster, but will recover slightly worse.
- Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3) to run.
+ Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3) to run.

When running, adjust hyperparameters based on training environment and dataset.

@@ -158,7 +158,7 @@

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
The contents of the `hyp.pruned.yaml` hyperparameters file is given below.
Adjust the script command for your GPU device setup.
@@ -223,7 +223,7 @@ quantization_modifiers:

This recipe creates a sparse-quantized, [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model that achieves 94% recovery of its baseline accuracy on the COCO detection dataset.
Training was done using 4 GPUs at half precision using a total training batch size of 256 with the
- [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3).
+ [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3).

When running, adjust hyperparameters based on training environment and dataset.

@@ -237,7 +237,7 @@ This additionally means that the checkpoints are saved using state_dicts rather

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
The contents of the `hyp.pruned_quantized.yaml` hyperparameters file is given below.
Adjust the script command for your GPU device setup.
@@ -223,7 +223,7 @@ quantization_modifiers:

This recipe creates a sparse-quantized [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model in a shortened schedule as compared to the original pruned recipe.
It will train faster, but will recover slightly worse.
- Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3) to run.
+ Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3) to run.

When running, adjust hyperparameters based on training environment and dataset.

@@ -237,7 +237,7 @@ This additionally means that the checkpoints are saved using state_dicts rather

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
The contents of the `hyp.pruned_quantized.yaml` hyperparameters file is given below.
Adjust the script command for your GPU device setup.
@@ -224,7 +224,7 @@ quantization_modifiers:
This is a test recipe useful for quickly evaluating the time and resources needed for pruning and quantizing a model.
In addition, it offers a quick integration tests pathway.
This recipe creates a sparse-quantized [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model that will not be accurate.
- Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3) to run.
+ Use the following [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3) to run.

Note that half-precision, EMA, and pickling are not supported for quantization.
Therefore, once quantization is run, all three will be disabled for the training pipeline.
@@ -236,7 +236,7 @@ This additionally means that the checkpoints are saved using state_dicts rather

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
The contents of the `hyp.pruned_quantized.yaml` hyperparameters file is given below.
Adjust the script command for your GPU device setup.
@@ -130,7 +130,7 @@ pruning_modifiers:
This recipe transfer learns from a sparse, [YOLOv3-SPP](https://arxiv.org/abs/1804.02767) model.
It was originally tested on the VOC dataset and achieved 0.84 mAP@0.5.

- Training was done using 4 GPUs at half precision with the [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/ultralytics-yolov3).
+ Training was done using 4 GPUs at half precision with the [SparseML integration with ultralytics/yolov3](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/ultralytics-yolov3).

When running, adjust hyperparameters based on training environment and dataset.

@@ -142,7 +142,7 @@ The training results for this recipe are made available through Weights and Bias

## Training

- To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/ultralytics-yolov3/README.md).
+ To set up the training environment, follow the instructions on the [integration README](https://github.com/neuralmagic/sparseml/blob/main/integrations/old-examples/ultralytics-yolov3/README.md).
Using the given training script from the `yolov3` directory the following command can be used to launch this recipe.
Adjust the script command for your GPU device setup.
Ultralytics supports both DataParallel and DDP.