Feature/sg 000 proparage fixes from master (#1577)
* [Improvement] max_batches support to training log and tqdm progress bar. (#1554)

* Added max_batches support to training log and tqdm progress bar.

* Changed the logged string according to which parameter is in effect (len(loader) or max_batches)

* Replaced the epoch stopping condition with the smaller of the two values (see the sketch below)

(cherry picked from commit 749a9c7)
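A minimal sketch of the pattern described above, assuming a `max_batches` argument and a generic training loop (illustrative only, not the SG `Trainer` code):

```python
from tqdm import tqdm

def run_epoch(loader, max_batches=None):
    # Stop at the smaller of len(loader) and max_batches, and report that number
    # (and which parameter it came from) in the progress bar / training log.
    num_batches = len(loader) if max_batches is None else min(len(loader), max_batches)
    source = "max_batches" if max_batches is not None and max_batches < len(loader) else "len(loader)"
    progress = tqdm(loader, total=num_batches, desc=f"Train ({source}={num_batches} batches)")
    for batch_idx, batch in enumerate(progress):
        if batch_idx >= num_batches:
            break
        ...  # forward / backward / optimizer step
```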

* fix (#1558)

Co-authored-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
(cherry picked from commit 8a1d255)

* fix (#1564)

(cherry picked from commit 24798b0)

* Bugfix of model.export() to work correctly with bs > 1 (#1551); see the sketch below

(cherry picked from commit 0515496)
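As a rough illustration of what this enables, the exported ONNX model can be fed a batch of several images through onnxruntime. The file name, input dtype and the -1 padding convention below are assumptions based on the updated documentation further down, not guaranteed API facts:

```python
import numpy as np
import onnxruntime

# Placeholder path: an ONNX file previously produced by model.export(...)
session = onnxruntime.InferenceSession("yolo_nas_s.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Batch of two BCHW images; uint8 assumes preprocessing was baked into the export.
batch = np.random.randint(0, 255, size=(2, 3, 640, 640), dtype=np.uint8)
outputs = session.run(None, {input_name: batch})

# Each image now gets its own row of predictions; unused slots are padded
# (with -1 after this fix, judging by the doc diff below) instead of repeating image 0.
for out in outputs:
    print(out.shape)
```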

* Fixed incorrect automatic variable used (#1565)

$@ is the name of the target being generated, and $^ are the dependencies

Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
(cherry picked from commit 43f8bea)

* fix typo in class documentation (#1548)

Co-authored-by: Eugene Khvedchenya <ekhvedchenya@gmail.com>
Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
(cherry picked from commit ec21383)

* Feature/sg 1198 mixed precision automatically changed with warning (#1567)

* fix

* work with tmpdir

* minor change of comment

* improve device_config

(cherry picked from commit 34fda6c)
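A hedged sketch of the behaviour this feature describes (names are illustrative; the actual logic lives in the SG device/trainer setup): if mixed precision is requested on a device that cannot use it, disable it with a warning instead of failing later.

```python
import warnings
import torch

def resolve_mixed_precision(requested: bool, device: str) -> bool:
    # AMP relies on CUDA autocast / GradScaler, so on a non-CUDA device we
    # automatically fall back to full precision and tell the user about it.
    if requested and ("cuda" not in device or not torch.cuda.is_available()):
        warnings.warn("Mixed precision was requested but CUDA is not available; disabling mixed precision.")
        return False
    return requested
```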

* Fixed issue with torch 1.12 where _scale_fn_ref is missing in CyclicLR (#1575)

(cherry picked from commit 23b4f7a)
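For context, newer torch versions carry a `_scale_fn_ref` attribute on `CyclicLR` that torch 1.12 does not have, so code that touches it needs a guard. A hedged sketch of that kind of guard, not the exact SG fix:

```python
import torch
from torch.optim.lr_scheduler import CyclicLR

optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
scheduler = CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)

# Guard any access: on torch 1.12 the attribute simply does not exist.
if hasattr(scheduler, "_scale_fn_ref"):
    print(scheduler._scale_fn_ref)

# The same applies to serialized state: drop the key only if it is present.
state = scheduler.state_dict()
state.pop("_scale_fn_ref", None)
```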

* Fixed issue with torch 1.12 where arange does not support fp16 on the CPU device (#1574); see the workaround sketch below.

(cherry picked from commit 1f15c76)
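For context, `torch.arange` with `dtype=torch.float16` is not implemented for the CPU backend in torch 1.12, so ranges built on CPU have to be created in fp32 and cast. A sketch of that workaround (the function name is illustrative):

```python
import torch

def half_arange_cpu_safe(end: int, device: torch.device) -> torch.Tensor:
    # Build the range in fp32 (supported everywhere), then cast to fp16.
    return torch.arange(end, dtype=torch.float32, device=device).to(torch.float16)

grid = half_arange_cpu_safe(80, torch.device("cpu"))
print(grid.dtype, grid.shape)
```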

---------

Co-authored-by: hakuryuu96 <marchenkophilip@gmail.com>
Co-authored-by: Louis-Dupont <35190946+Louis-Dupont@users.noreply.github.com>
Co-authored-by: Alessandro Ros <aler9.dev@gmail.com>
4 people committed Oct 26, 2023
1 parent c9c368d commit e8a51d3
Showing 23 changed files with 668 additions and 286 deletions.
4 changes: 2 additions & 2 deletions Makefile
@@ -36,8 +36,8 @@ NOTEBOOKS = src/super_gradients/examples/model_export/models_export.ipynb

# This Makefile target runs notebooks listed below and converts them to markdown files in documentation/source/
run_and_convert_notebooks_to_docs: $(NOTEBOOKS)
-	jupyter nbconvert --to markdown --output-dir="documentation/source/" --execute $@
+	jupyter nbconvert --to markdown --output-dir="documentation/source/" --execute $^

# This Makefile target runs notebooks listed below and converts them to markdown files in documentation/source/
check_notebooks_version_match: $(NOTEBOOKS)
-	python tests/verify_notebook_version.py $@
+	python tests/verify_notebook_version.py $^
35 changes: 22 additions & 13 deletions documentation/source/models_export.md
@@ -28,6 +28,15 @@ A new export API is introduced in SG 3.2.0. It is aimed to simplify the export p
- Customising NMS parameters and number of detections per image
- Customising output format (flat or batched)


```python
!pip install super_gradients==3.3.1
```

ERROR: Could not find a version that satisfies the requirement super_gradients==3.3.1 (from versions: 1.3.0, 1.3.1, 1.4.0, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 2.0.0, 2.0.1, 2.1.0, 2.2.0, 2.5.0, 2.6.0, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.0.4, 3.0.5, 3.0.6, 3.0.7, 3.0.8, 3.0.9, 3.1.0, 3.1.1, 3.1.2, 3.1.3, 3.2.0, 3.2.1, 3.3.0)
ERROR: No matching distribution found for super_gradients==3.3.1


### Minimalistic export example

Let's start with the simplest example of exporting a model to ONNX format.
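The original code cells for this section are collapsed in this diff view, so here is a hedged sketch of what the minimal call looks like (model name and output path are placeholders; `models.get` and `model.export` follow the SG 3.2+ export API as this guide describes it):

```python
from super_gradients.common.object_names import Models
from super_gradients.training import models

# Any exportable detection model works here; YOLO-NAS-S is just an example.
model = models.get(Models.YOLO_NAS_S, pretrained_weights="coco")

# Produces an ONNX file with preprocessing, postprocessing and NMS included.
export_result = model.export("yolo_nas_s.onnx")
print(export_result)  # summarizes the expected input and the output format
```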
@@ -203,9 +212,9 @@ pred_boxes, pred_boxes.shape
[ 35.71795, 249.40926, 176.62216, 544.69794],
[182.39618, 249.49301, 301.44122, 529.3324 ],
...,
- [ 0. , 0. , 0. , 0. ],
- [ 0. , 0. , 0. , 0. ],
- [ 0. , 0. , 0. , 0. ]]], dtype=float32),
+ [ -1. , -1. , -1. , -1. ],
+ [ -1. , -1. , -1. , -1. ],
+ [ -1. , -1. , -1. , -1. ]]], dtype=float32),
(1, 1000, 4))


@@ -219,8 +228,8 @@ pred_scores, pred_scores.shape



- (array([[0.9694027, 0.9693378, 0.9665707, 0.9619047, 0.7538769, ...,
-          0. , 0. , 0. , 0. , 0. ]],
+ (array([[ 0.9694027, 0.9693378, 0.9665707, 0.9619047, 0.7538769, ...,
+          -1. , -1. , -1. , -1. , -1. ]],
dtype=float32),
(1, 1000))

@@ -235,8 +244,8 @@ pred_classes, pred_classes.shape



- (array([[0, 0, 0, 0, 0, 0, 0, 0, 2, 2, ..., 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
-        dtype=int64),
+ (array([[ 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, ..., -1, -1, -1, -1, -1,
+        -1, -1, -1, -1, -1]], dtype=int64),
(1, 1000))


@@ -295,7 +304,7 @@ show_predictions_from_batch_format(image, result)



- ![png](models_export_files/models_export_18_0.png)
+ ![png](models_export_files/models_export_19_0.png)



@@ -411,7 +420,7 @@ show_predictions_from_flat_format(image, result)



- ![png](models_export_files/models_export_24_0.png)
+ ![png](models_export_files/models_export_25_0.png)



@@ -447,7 +456,7 @@ show_predictions_from_flat_format(image, result)



- ![png](models_export_files/models_export_26_0.png)
+ ![png](models_export_files/models_export_27_0.png)



@@ -481,7 +490,7 @@ show_predictions_from_flat_format(image, result)



- ![png](models_export_files/models_export_28_0.png)
+ ![png](models_export_files/models_export_29_0.png)



@@ -522,12 +531,12 @@ result = session.run(outputs, {inputs[0]: image_bchw})
show_predictions_from_flat_format(image, result)
```

- 25%|█████████████████████████████████████████████████          | 4/16 [00:11<00:34, 2.90s/it]
+ 25%|██████████████████████████████                             | 4/16 [00:11<00:33, 2.79s/it]




- ![png](models_export_files/models_export_30_1.png)
+ ![png](models_export_files/models_export_31_1.png)



16 changes: 15 additions & 1 deletion src/super_gradients/common/environment/device_utils.py
@@ -19,10 +19,24 @@ def _get_assigned_rank() -> int:

@dataclasses.dataclass
class DeviceConfig:
-    device: str = "cuda" if torch.cuda.is_available() else "cpu"
+    _device: str = "cuda" if torch.cuda.is_available() else "cpu"
    multi_gpu: str = None
    assigned_rank: int = dataclasses.field(default=_get_assigned_rank(), init=False)

+    @property
+    def device(self) -> str:
+        return self._device
+
+    @device.setter
+    def device(self, value: str):
+        if "cuda" in value and not torch.cuda.is_available():
+            raise ValueError("CUDA is not available, cannot set device to cuda")
+        self._device = value
+
+    @property
+    def is_cuda(self):
+        return "cuda" in self._device


# Singleton holding the device information
device_config = DeviceConfig()
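A short usage sketch of the new accessors (the import path matches the file shown above; the error message is the one raised by the new setter):

```python
from super_gradients.common.environment.device_utils import device_config

print(device_config.device, device_config.is_cuda)

try:
    device_config.device = "cuda"  # validated by the new setter
except ValueError as err:
    # Raised on machines without CUDA instead of failing later at runtime.
    print(err)
```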
2 changes: 1 addition & 1 deletion src/super_gradients/conversion/conversion_utils.py
@@ -12,7 +12,7 @@
    (torch.int16, np.int16),
    (torch.int8, np.int8),
    (torch.uint8, np.uint8),
-    (torch.bool, np.bool),
+    (torch.bool, bool),
]
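The right-hand side of each pair is what the converter hands to NumPy; `np.bool` was removed in NumPy 1.24, which is why the built-in `bool` is now used for `torch.bool`. A small usage sketch of such a mapping (the helper name is hypothetical):

```python
import numpy as np
import torch

# Mirrors the (torch dtype, numpy dtype) pairs above; only a subset is listed here.
_TORCH_TO_NUMPY = [
    (torch.float32, np.float32),
    (torch.int64, np.int64),
    (torch.uint8, np.uint8),
    (torch.bool, bool),  # np.bool no longer exists in NumPy >= 1.24
]

def torch_dtype_to_numpy(dtype: torch.dtype):
    for torch_dtype, numpy_dtype in _TORCH_TO_NUMPY:
        if dtype == torch_dtype:
            return numpy_dtype
    raise KeyError(f"No NumPy equivalent registered for {dtype}")

print(np.dtype(torch_dtype_to_numpy(torch.bool)))  # bool
```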


