Merge branch 'master' into feature/SG-1046-improve-docs-on-variable_setup
BloodAxe committed Aug 28, 2023
2 parents 65a4496 + fd87ce0 commit e9283c9
Showing 8 changed files with 369 additions and 310 deletions.
2 changes: 1 addition & 1 deletion documentation/source/QuickstartBasicToolkit.md
@@ -293,7 +293,7 @@ plt.imshow(image)
export PYTHONPATH=$PYTHONPATH:<YOUR-LOCAL-PATH>/super-gradients/
```

1. Launch one of SG's <a href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/Training_Recipes.md">training recipes</a>. For example, Resnet18 on Cifar10:
1. Launch one of SG's <a href="https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes">training recipes</a>. For example, Resnet18 on Cifar10:

```shell
python -m super_gradients.train_from_recipe --config-name=cifar10_resnet experiment_name=my_resnet18_cifar10_experiment
160 changes: 160 additions & 0 deletions documentation/source/Recipes_Custom.md
@@ -0,0 +1,160 @@
## Training on Custom Recipes


Prerequisites:
- [Introduction to Configuration Files](configuration_files.md)
- [Introduction to Training Recipes](Recipes_Training.md)
- [Working with Factories](Recipes_Factories.md)


In this section, we will assume that you want to build your own recipe and train a model based on it.

We will cover two different approaches to writing your recipe.
1. **SuperGradients Format** - you stick to the format used in SuperGradients.
2. **Custom Format** - you organize recipes the way you want.


### 1. SuperGradients Format
This approach is most appropriate when you want to quickly get started.

Since you will be following all of the SuperGradients conventions when building your recipe,
you won't have to worry about working with hydra to instantiate your objects and launch the training; SuperGradients already provides a script that does this for you.

**How to get started?**

1. We recommend going through the [pre-defined recipes](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/)
and choosing the one that seems most similar to your use case. Make sure it covers the same task as yours.
2. Copy it into a folder inside your project that is dedicated to recipes.
3. Override the required parameters to fit your needs, but keep the same structure (see the sketch after this list). Consider [registering custom objects](Recipes_Factories.md) if you need to.
4. Copy the [train_from_recipe script](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/train_from_recipe.py) into your project (a copy is shown below, after the recipe sketch), and remember to replace `<config-path>` with the path to your recipe folder.
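
For illustration, a copied recipe that keeps the SuperGradients structure but overrides a few values could look roughly like the sketch below. The file name, experiment name, and overridden values are hypothetical, and the `hydra.searchpath` entry is only needed if your `defaults` keep referencing configs packaged inside SuperGradients (see the Tips section at the end of this page):

```yaml
# my_recipes/my_cifar10_resnet.yaml -- a hypothetical copy of cifar10_resnet.yaml
defaults:
  - training_hyperparams: cifar10_resnet_train_params
  - dataset_params: cifar10_dataset_params
  - arch_params: resnet18_cifar_arch_params
  - checkpoint_params: default_checkpoint_params
  - _self_

hydra:
  searchpath:
    - pkg://super_gradients.recipes  # resolve the defaults above from SG's packaged recipes

experiment_name: my_resnet18_cifar10_experiment

training_hyperparams:
  max_epochs: 50     # hypothetical overrides
  initial_lr: 0.05

... # the rest of the copied recipe stays unchanged
```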


```python
# The code below is the same as the `train_from_recipe.py` script
# See: https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/train_from_recipe.py
import hydra
from omegaconf import DictConfig
from super_gradients import Trainer, init_trainer

@hydra.main(config_path="<config-path>", version_base="1.2")  # TODO: overwrite `<config-path>`
def _main(cfg: DictConfig) -> None:
    Trainer.train_from_config(cfg)

def main() -> None:
    init_trainer()  # `init_trainer` needs to be called before `@hydra.main`
    _main()

if __name__ == "__main__":
    main()
```


### 2. Custom Format

With this approach, you will have much more freedom in how you organize your recipe, but this comes at the cost of writing more code!
It is mainly recommended for specific use cases that are not properly covered by the previous approach.

Even though it is not required with this approach, we strongly recommend using the same format as
SuperGradients, as it allows you to build on top of the pre-defined recipes.


**What are the recipe format constraints here?**

With this approach, you will still need to follow certain conventions:
- `training_hyperparams` should include the same required fields as with the previous approach. You can find the list [here](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/training_hyperparams/default_train_params.yaml).
- The config passed to `dataloaders.get` should still be compatible with the dataset/dataloader you want to load.

Basically, the format constraints you will face with this approach are the same as those you would face when working exclusively with Python.
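
As a rough sketch (all names and values here are illustrative rather than taken from an actual recipe), a custom-format recipe whose top-level keys match what the launch script shown in the next section reads from `cfg` could look like this:

```yaml
experiment_name: my_experiment            # illustrative values throughout
ckpt_root_dir: /path/to/checkpoints

device: cuda
multi_gpu: Off
num_gpus: 1

architecture: resnet18
arch_params:
  num_classes: 10

checkpoint_params:
  strict_load: True
  pretrained_weights: null
  checkpoint_path: null
  load_backbone: False

train_dataloader: cifar10_train
val_dataloader: cifar10_val
dataset_params:
  train_dataset_params: {}
  train_dataloader_params: {batch_size: 64}
  val_dataset_params: {}
  val_dataloader_params: {batch_size: 64}

training_hyperparams:
  max_epochs: 20
  initial_lr: 0.01
  ... # plus the remaining required fields from default_train_params.yaml
```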


**How to launch a training?**

Similarly to the previous approach, you will need a script that launches the training.
The difference is that here you won't be using `Trainer.train_from_config`; instead, you will instantiate all the required objects yourself in your script.

Here is an example of what such a script could look like:
```python
import hydra
from omegaconf import DictConfig

from super_gradients import Trainer, init_trainer, setup_device
from super_gradients.training import dataloaders, models

@hydra.main(config_path="<config-path>", version_base="1.2")  # TODO: overwrite `<config-path>`
def _main(cfg: DictConfig) -> None:
    setup_device(
        device=cfg.device,
        multi_gpu=cfg.multi_gpu,
        num_gpus=cfg.num_gpus,
    )

    # INSTANTIATE ALL OBJECTS IN CFG
    cfg = hydra.utils.instantiate(cfg)

    trainer = Trainer(experiment_name=cfg.experiment_name, ckpt_root_dir=cfg.ckpt_root_dir)

    # BUILD NETWORK
    model = models.get(
        model_name=cfg.architecture,
        num_classes=cfg.arch_params.num_classes,
        arch_params=cfg.arch_params,
        strict_load=cfg.checkpoint_params.strict_load,
        pretrained_weights=cfg.checkpoint_params.pretrained_weights,
        checkpoint_path=cfg.checkpoint_params.checkpoint_path,
        load_backbone=cfg.checkpoint_params.load_backbone,
    )

    # INSTANTIATE DATA LOADERS
    train_dataloader = dataloaders.get(
        name=cfg.train_dataloader,
        dataset_params=cfg.dataset_params.train_dataset_params,
        dataloader_params=cfg.dataset_params.train_dataloader_params,
    )

    val_dataloader = dataloaders.get(
        name=cfg.val_dataloader,
        dataset_params=cfg.dataset_params.val_dataset_params,
        dataloader_params=cfg.dataset_params.val_dataloader_params,
    )

    # TRAIN
    results = trainer.train(
        model=model,
        train_loader=train_dataloader,
        valid_loader=val_dataloader,
        training_params=cfg.training_hyperparams,
        additional_configs_to_log={},
    )
    print(results)


def main() -> None:
    init_trainer()  # `init_trainer` needs to be called before `@hydra.main`
    _main()

if __name__ == "__main__":
    main()
```


## Tips

### Building on top of SuperGradients Recipes
By default, `defaults` only works with recipes that are defined in the same recipe directory, but this can be extended to other directories.

In our case, this comes in handy when you want to build on top of recipes that are implemented in SuperGradients.

#### Example

Using `default_train_params` defined in [super_gradients/recipes/training_hyperparams/default_train_params.yaml](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/training_hyperparams/default_train_params.yaml)

```yaml
defaults:
  - training_hyperparams: default_train_params

hydra:
  searchpath:
    - pkg://super_gradients.recipes

... # Continue with your recipe
```
documentation/source/Recipes_Factories.md
@@ -4,6 +4,7 @@ Factories in SuperGradients provide a powerful and concise way to instantiate objects

Prerequisites:
- [Training with Configuration Files](configuration_files.md)
- [Introduction to Training Recipes](Recipes_Training.md)

In this tutorial, we'll cover how to use existing factories, register new ones, and briefly explore the implementation details.

@@ -142,7 +143,7 @@ factory = TransformsFactory()
my_transform = factory.get({'MyTransformName': {'prob': 0.7}})
```
You may recognize that the input passed to `factory.get` is actually the dictionary that we get after loading the recipe
(See [Using Existing Factories](#using-existing-factories))
(See [Utilizing Existing Factories](#utilizing-existing-factories))

### Recommended
Factories become even more powerful when used with the `@resolve_param` decorator.
@@ -152,7 +153,7 @@ It means you can pass either the actual python object or a dictionary that describes it.
```python
class ImageNetDataset(torch_datasets.ImageFolder):

@resolve_param("transform", factory=TransformsFactory())
@resolve_param("transforms", factory=TransformsFactory())
def __init__(self, root: str, transform: Transform):
...
```
@@ -209,3 +210,15 @@ from super_gradients.common.factories import (
register_processing,
)
```

### Conclusion

In this tutorial, we have delved into the realm of factories, encompassing:
- **Using Existing Factories**: How SuperGradients automatically instantiates objects defined in recipes.
- **Registering New Classes**: The method to map object names to corresponding class types, and how to integrate them in your recipes.
- **Under the Hood**: Insights into basic and recommended ways to use factories, as well as the variety of supported factory types within SuperGradients.

These insights provide essential understanding and practical techniques to work with factories, a core element in SuperGradients that bridges the gap between configuration and instantiation.

**Next Step**: Ready to craft your unique recipes? In the [next tutorial](Recipes_Custom.md),
we'll guide you through building your own recipe and training a model based on that recipe.
154 changes: 154 additions & 0 deletions documentation/source/Recipes_Training.md
@@ -0,0 +1,154 @@
# Training Recipes

Recipes aim to provide a simple interface for easily reproducing training runs.

**Prerequisites**
- [Introduction to Configuration Files](configuration_files.md)


## Training from a Recipe

As explained in our [introduction to configuration files](configuration_files.md), SuperGradients uses the `hydra`
library combined with `.yaml` recipes to allow you to easily customize the parameters.

The basic syntax to train a model from a recipe is as follows:
```bash
python -m super_gradients.train_from_recipe --config-name=<config-name>
```
Here, `<config-name>` corresponds to the name of the recipe.

You can find all of the pre-defined recipes in [super_gradients/recipes](https://github.com/Deci-AI/super-gradients/tree/master/src/super_gradients/recipes).
Each recipe's header usually contains information about its performance, as well as the command to execute it.

### Examples
- Training of Resnet18 on Cifar10: [super_gradients/recipes/cifar10_resnet.yaml](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/cifar10_resnet.yaml)
```bash
python -m super_gradients.train_from_recipe --config-name=cifar10_resnet
```

- Training of YoloX Small on COCO 2017 (8 GPUs): [super_gradients/recipes/coco2017_yolox](https://github.com/Deci-AI/super-gradients/blob/master/src/super_gradients/recipes/coco2017_yolox.yaml)
```bash
python -m super_gradients.train_from_recipe --config-name=coco2017_yolox architecture=yolox_s dataset_params.data_dir=/home/coco2017
```


## Customize Training
You may often need to modify certain parameters within a recipe, and there are two approaches for this:
1. Using hydra overrides.
2. Modifying the recipe.


### 1. Hydra Overrides

Hydra overrides allow you to change parameters directly from the command line.
This approach is ideal when you want to quickly experiment with changing a couple of parameters.

Here's the general syntax:

```bash
python -m super_gradients.train_from_recipe --config-name=<config-name> param1=<val1> path.to.param2=<val2>
```

- **Parameters** - Listed without the `--` prefix.
- **Full Path** - Use the entire path in the configuration tree, with each level separated by a `.`.


#### Example
Suppose your recipe looks like this:
```yaml
training_hyperparams:
  max_epochs: 250
  initial_lr: 0.1
  ...

dataset_params:
  data_dir: /local/mydataset
  ...

... # Many other parameters
```

Changing Epochs or Learning Rate
```bash
python -m super_gradients.train_from_recipe --config-name=<config-name> training_hyperparams.max_epochs=250 training_hyperparams.initial_lr=0.03
```

Changing the Dataset Path
```bash
python -m super_gradients.train_from_recipe --config-name=<config-name> dataset_params.data_dir=<path-to-dataset>
```

> Note: Parameter names may differ between recipes, so please check the specific recipe to ensure you're using the correct names.

### 2. Modifying the Recipe
If you are working on a cloned version of SuperGradients (`git clone ...`),
you can directly modify the existing recipes.

If you installed SuperGradients with pip, you won't be able to modify the predefined recipes.
Instead, you should create your own recipe inside your project; you will still be able to build it on top of SuperGradients' predefined recipes.

We explain all of this in the [following tutorial](Recipes_Custom.md), but we strongly recommend that you
finish this one first, as it includes information required to fully understand how that works.


## Recipe Structure
When browsing the YAML files in the `recipes` directory, you'll notice that some files contain the key `defaults` at the beginning of the file. Here's an example of what this looks like:

```yaml
defaults:
  - training_hyperparams: cifar10_resnet_train_params
  - dataset_params: cifar10_dataset_params
  - arch_params: resnet18_cifar_arch_params
  - checkpoint_params: default_checkpoint_params
  - _self_

...
```

### Components of a Recipe

- **Defaults**: The `defaults` section is critical, and it leverages Hydra's composition syntax. It serves to reference other recipes, allowing you to create modular and reusable configurations.
- **Referencing Parameters**: This allows you to point to specific parameters in the YAML file according to where they originate. For example, `training_hyperparams.initial_lr` refers to the `initial_lr` parameter from the `cifar10_resnet_train_params.yaml` file.
- **Recipe Parameters - `_self_`**: The `_self_` keyword has a special role. It permits the current recipe to override the defaults, and its impact depends on its position in the `defaults` list (see the example below).
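
As an illustrative snippet (not one of the shipped recipes), placing `_self_` last lets the values written in the recipe itself win over the composed defaults:

```yaml
defaults:
  - training_hyperparams: cifar10_resnet_train_params
  - _self_   # placed last, so the values below override the defaults above

training_hyperparams:
  initial_lr: 0.05   # overrides initial_lr from cifar10_resnet_train_params.yaml
```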

### Understanding Override Order

> 🚨 **Warning**: The order of items in the `defaults` section is significant! The overwrite priority follows the list order, meaning that a config defined higher in the list can be overwritten by one defined lower in the list. This is a vital aspect to be aware of when constructing recipes. For a more detailed explanation, please refer to the [official documentation](https://hydra.cc/docs/tutorials/basic/your_first_app/defaults/#composition-order-of-primary-config).
### Organizing Your Recipe Folder

Your recipe folder should have a specific structure to match this composition:

```
├─ cifar10_resnet.yaml
├─ ...
├─training_hyperparams
│ ├─ cifar10_resnet_train_params.yaml
│ └─ ...
├─dataset_params
│ ├─ cifar10_dataset_params.yaml
│ └─ ...
├─arch_params
│ ├─ resnet18_cifar_arch_params.yaml
│ └─ ...
└─checkpoint_params
    ├─ default_checkpoint_params.yaml
    └─ ...
```

You're not restricted to this structure, but following it ensures compatibility with SuperGradients' expectations.


## Conclusion

This tutorial has introduced you to the world of training recipes within SuperGradients. Specifically, you've learned:
- **How to Train Models**: Utilizing `.yaml` recipes to effortlessly train and customize models.
- **Ways to Customize Training**: Tailoring your training through hydra overrides or direct modifications to the recipes.
- **Understanding Recipe Structure**: Grasping the organization and conventions that help you align with SuperGradients' expectations.

We've laid the groundwork for understanding how recipes enable flexible and reproducible training.

**Next Step**: In the [next tutorial](Recipes_Factories.md), we'll explore factories in SuperGradients,
revealing how they work with recipes to dynamically instantiate objects. It's a critical step in leveraging the
full power of SuperGradients for your unique needs.
